The Next Frontier in Computing: AI, Automation, and the Digital Age
The next decade and a half will see technology giants embrace machine learning and AI at scale, driving massive demand for compute power and specialised hardware such as GPUs, and creating investment opportunities across software, semiconductors, and critical materials.
As the global economy continues its steady march into the digital age, it is becoming increasingly clear that computing power, not oil, not labour, and not even capital, will define the next major technological cycle. Over the next decade and beyond, the world’s largest technology companies are expected to undergo a fundamental shift in how they design, deploy, and monetise software. At the centre of this shift lie machine learning systems, automation, and what is increasingly referred to as artificial intelligence.
Companies such as Microsoft, Amazon, and Google already sit atop vast oceans of data. Over the next decade, it is likely these firms will seek to extract far greater value from that data by automating analysis, decision-making, and information retrieval. Doing so, however, will require compute capacity on a scale that far exceeds today’s enterprise systems.
This shift will place unprecedented demand on server farms and data centres. General-purpose processors may no longer be sufficient. Instead, chipsets will need to be modified, customised, or purpose-built to handle large-scale parallel computation. As computing moves away from simple instruction execution toward pattern recognition, statistical inference, and probabilistic modelling, hardware will once again become the limiting factor, and therefore the opportunity.
Hardware as the Backbone of the Digital Economy
While software often captures public attention, history suggests that every major technological revolution is first enabled by hardware. The personal computer boom of the 1980s, the internet expansion of the 1990s, and the mobile revolution of the early 2000s all followed this pattern. Artificial intelligence is unlikely to be any different.
Existing hardware manufacturers are particularly well positioned. Companies such as AMD and Nvidia already possess the fabrication relationships, engineering talent, and production scale required to respond quickly to new compute demands. Graphics processors, in particular, are increasingly recognised as efficient engines for parallel mathematical operations, precisely the kind of workload that machine learning systems require.
As silicon technology continues to improve, and transistor density increases over the next decade and a half, we should reasonably expect at least an order-of-magnitude increase in available compute power. Combined with architectural improvements and specialised processing units, this may be sufficient to move artificial intelligence from theoretical research labs into practical, commercial deployment.
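To put a rough number on that claim: assuming compute capacity doubles roughly every two years (the doubling cadence here is an assumption, not a figure from this article), a simple compounding calculation over fifteen years suggests well over a hundred-fold increase in raw compute. A minimal sketch of the arithmetic, in Python:

    # Compounding sketch: the two-year doubling cadence is an assumed figure
    years = 15
    doubling_period_years = 2
    growth_factor = 2 ** (years / doubling_period_years)
    print(f"~{growth_factor:.0f}x more raw compute after {years} years")  # roughly 181x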
What Will AI Look Like?
In its earliest forms, artificial intelligence is unlikely to resemble human cognition in any meaningful sense. Instead, it will be computationally expensive, inefficient, and heavily reliant on brute force. Early AI systems will require enormous processing power simply to function at a basic level, and the software that drives them will likely be inelegant by necessity.
The most practical approach, given current technological constraints, will be the use of extremely large datasets. In this sense, early AI systems may resemble an extension of existing search technologies. Today, search engines such as Google do not “understand” information in a human way; they index, categorise, and retrieve vast quantities of data at remarkable speed. Initial AI programs will build upon this concept, automating the organisation and synthesis of information to produce outputs that appear intelligent.
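As a minimal illustration of that ‘index, categorise, and retrieve’ idea, the sketch below builds a toy inverted index over a handful of invented documents and answers a query against it. The corpus, query, and function names are purely illustrative:

    from collections import defaultdict

    # Toy corpus standing in for a search engine's document store (invented examples)
    documents = {
        1: "machine learning needs large datasets and compute",
        2: "graphics processors excel at parallel mathematics",
        3: "search engines index and retrieve documents quickly",
    }

    # Build an inverted index: word -> set of document ids containing that word
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index[word].add(doc_id)

    def retrieve(query):
        """Return the ids of documents containing every word in the query."""
        doc_sets = [index.get(word, set()) for word in query.lower().split()]
        return set.intersection(*doc_sets) if doc_sets else set()

    print(retrieve("parallel mathematics"))  # -> {2}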
As a result, early AI applications may focus on tasks such as answering questions, summarising information, or producing structured responses far more quickly than a human could through manual research. With sufficient compute power and data, it is reasonable to expect users to issue commands such as:
“Write me a short story.”
“Answer these mathematics questions and show all working.”
“Write a comedy script with four characters in the style of The Three Stooges.”
These capabilities represent a logical evolution from traditional search: rather than returning links, the system returns completed outputs. Over time, similar principles could be extended beyond text. Image generation, audio synthesis, and even rudimentary video creation may become possible once these systems mature and datasets expand to include non-textual media.
Investment Opportunities in the Next Decade and a Half
From an investment perspective, the implications of this shift are substantial.
First, software-centric technology giants such as Microsoft, Google, and Amazon are likely to benefit from their control over data, platforms, and distribution. Whether through internal development or partnerships with specialist firms, these companies are well positioned to commercialise early artificial intelligence systems and embed them into enterprise and consumer products.
Second, the hardware sector may experience a powerful, front-loaded expansion. Semiconductor manufacturing is capital-intensive and slow to scale, which means incumbents hold a natural advantage. Nvidia and AMD, along with other established chipmakers, may be among the earliest beneficiaries as demand for specialised compute accelerates.
Beyond technology firms themselves, the physical materials required to support this infrastructure should not be overlooked. Large-scale data centres consume vast quantities of conductive and industrial materials. Metals such as copper, silver, and gold are essential for processors, wiring, and server construction. As compute density increases, so too does the material intensity of digital infrastructure.
Finally, there is a parallel trend worth monitoring. Should electric vehicles move from experimental prototypes into mass production over the next decade (as many forecasts suggest), demand for battery materials is likely to surge. Lithium, cobalt, and aluminium may experience significant early demand as manufacturers scale production and infrastructure adapts.
Existing Hardware: Can It Pivot for the Next Wave of Data Processing?
This should be prefaced with the notion that the existing hardware in personal and mainframe computers is simply not powerful enough to achieve real-time ‘intelligent computing’. Over the coming decade and beyond, however, it should become so, provided Moore’s Law is maintained (or exceeded) as silicon and chip technology are pushed to their limits.
Today’s computers are built from a number of distinct hardware components, each designed to handle particular functions. At the heart of every machine is the central processing unit (CPU), the primary engine that executes instructions for the operating system and all running applications. Around the CPU are supporting subsystems that enable specific capabilities: networking chips for internet access, audio hardware for sound, and graphics hardware for visual output. These pieces differ widely in their complexity and computational role.
In a typical personal computer, the CPU and motherboard are always active. The CPU continuously processes instructions, manages system resources, and orchestrates the flow of data. Other components, such as the audio controller, remain idle unless their specific function (e.g., playing sound) is needed, and are comparatively simple from a computational perspective.
Another important component is the graphics processing unit (GPU), found either on a dedicated video card or integrated into the system. Although it is often thought of primarily as hardware for rendering visuals and video games, the GPU’s architecture makes it fundamentally different from a CPU. GPUs contain many more arithmetic units than a standard CPU core and are designed to rapidly perform a large number of simple mathematical operations in parallel. They achieve this by breaking a problem down into many smaller tasks that can be processed simultaneously, a capability that proved especially useful for graphics rendering and may also be leveraged for more general computation in the future.
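One way to picture that workload shape is the sketch below, which applies the same simple arithmetic to a million values, first one element at a time and then as a single bulk (vectorised) operation over the whole array. The bulk form is the kind of task a GPU can spread across its many arithmetic units; NumPy here runs on the CPU and is used only as a stand-in for the data-parallel model:

    import numpy as np

    values = np.arange(1_000_000, dtype=np.float32)

    # Sequential view: one small, identical calculation per element
    scaled_loop = np.empty_like(values)
    for i in range(len(values)):
        scaled_loop[i] = values[i] * 2.0 + 1.0

    # Data-parallel view: the same calculation expressed as one operation over
    # the whole array, the form a GPU can split across thousands of arithmetic units
    scaled_bulk = values * 2.0 + 1.0

    assert np.allclose(scaled_loop, scaled_bulk)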
In contrast, CPUs generally have fewer cores optimized for sequential, general-purpose work. They excel at managing diverse tasks and keeping the operating system and user applications running smoothly, but they are not inherently built for the vast numbers of identical calculations that emerging “smart” algorithms may demand. In practice, a CPU must juggle many responsibilities: running the operating system, managing memory, servicing input/output devices, and executing application logic. While CPUs can be used today to process large datasets or perform complex mathematical calculations, doing so often competes for resources with the other tasks a general-purpose machine must handle.
Moreover, CPUs share the main system memory (RAM) with the rest of the computer. This means that every intensive computation competes for the same memory pool that the operating system and applications need. GPUs, by contrast, often have their own onboard memory (video RAM) that is dedicated to the graphics processor alone. This dedicated memory allows GPUs to handle large blocks of data without placing additional strain on the system’s general-purpose RAM, which can make them more efficient for specific types of parallel workloads that involve repeated arithmetic operations.
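That separation between system RAM and dedicated video RAM can be made concrete with a short sketch. It assumes the CuPy library and a CUDA-capable GPU, neither of which is specified above; the explicit copies mark the boundary between the two memory pools, and the arithmetic runs in the GPU’s own memory:

    import numpy as np
    import cupy as cp  # assumes the CuPy library and a CUDA-capable GPU

    host_data = np.random.rand(1_000_000).astype(np.float32)  # lives in general-purpose system RAM

    device_data = cp.asarray(host_data)          # explicit copy into the GPU's dedicated video RAM
    device_result = cp.sqrt(device_data) * 3.0   # the arithmetic runs entirely in GPU memory

    host_result = cp.asnumpy(device_result)      # explicit copy of the result back to system RAM

The deliberate copies in and out of device memory are the design point: data crosses into dedicated video RAM only when asked to, so the repeated arithmetic itself places no additional strain on the general-purpose memory pool.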
Because of this difference in design, graphics hardware is increasingly seen not just as a subsystem for games and video, but as potential compute hardware for math-intensive tasks once the right software exists to take advantage of it. In technical circles this concept is referred to as general-purpose computing on GPUs (GPGPU), using the same hardware that accelerates graphics to accelerate other forms of data processing by exploiting the GPU’s strength at parallel math tasks.
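A hedged sketch of what GPGPU can look like in practice, again assuming NumPy for the CPU path and CuPy with a CUDA-capable GPU for the GPU path: the same matrix multiplication, standing in for the heavy mathematics described above, is run and timed on each. The matrix size and libraries are illustrative choices, not recommendations:

    import time
    import numpy as np
    import cupy as cp  # assumes the CuPy library and a CUDA-capable GPU

    size = 4096  # illustrative matrix dimension
    a_host = np.random.rand(size, size).astype(np.float32)
    b_host = np.random.rand(size, size).astype(np.float32)

    # General-purpose CPU path
    start = time.perf_counter()
    np.matmul(a_host, b_host)
    cpu_seconds = time.perf_counter() - start

    # Same mathematics on the graphics processor
    a_dev, b_dev = cp.asarray(a_host), cp.asarray(b_host)
    cp.matmul(a_dev, b_dev)                 # warm-up call; includes one-off kernel setup
    cp.cuda.Device().synchronize()
    start = time.perf_counter()
    cp.matmul(a_dev, b_dev)
    cp.cuda.Device().synchronize()          # wait for the asynchronous GPU work to finish
    gpu_seconds = time.perf_counter() - start

    print(f"CPU: {cpu_seconds:.3f}s  GPU: {gpu_seconds:.3f}s")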
If future “smart programs” or early artificial intelligence systems require vast numbers of mathematical operations on large datasets, it stands to reason that leveraging GPU-style parallel computation could be far more efficient than relying solely on traditional CPUs. From a cost perspective, repurposing existing GPU hardware (or building on its architectural principles) may prove more practical in the near term than designing entirely new custom processors. While CPUs will continue to serve as the backbone of general-purpose computing, GPUs and other parallel processors may become the preferred engine for the heavy mathematics that next-generation data-driven software demands.
Looking Forward
Artificial intelligence, as discussed today, remains largely theoretical. Yet the economic logic behind it is compelling. Data continues to grow exponentially. Human analysis does not scale. Machines, given sufficient power, might.
If hardware capabilities continue their historic trajectory, and if software can evolve to exploit that power, artificial intelligence may become one of the defining technological forces of the early 21st century. Those who understand the importance of compute, materials, and infrastructure (not just software) may be best positioned to navigate the next great digital transformation.