Beyond Moore’s Law: LupoToro Group’s Research on Photonic Computing and the Future of Ultra-Efficient AI Infrastructure
LupoToro Group’s Research Team concludes that with Moore’s Law reaching its physical limits, photonic computing - using light for ultra-fast, energy-efficient data processing and interconnection - represents the most viable theoretical pathway to sustain exponential growth in global computational power.
For decades we at LupoToro have recognized that Moore’s Law – the doubling of transistors per chip roughly every two years – was the engine of progress in computing. That trend drove explosive growth in performance across industries, from smartphones to cloud data centers. Today, however, Moore’s Law is hitting a wall. Transistor features are now only a few atoms wide, and further miniaturization has become impractically expensive or physically impossible due to quantum and thermal limits. Shrinking transistors no longer yields the same gains: Dennard scaling ended in the mid-2000s, so smaller transistors no longer come with proportionally lower power, and power density climbs with each node. In our analysis, silicon-based chips are increasingly bottlenecked by power density and heat removal. Even modest AI workloads already strain power budgets – training a single large model can consume as much energy as hundreds of households use in a year. With so much heat and energy being dumped into cooling infrastructure, continued gains in raw transistor speed are of limited value.
More fundamentally, we have observed that performance is no longer limited by transistor speed but by data movement. Large AI models today run on clusters with thousands of GPUs, yet the pace of computation is throttled by the inter-chip communication fabric. Traditional copper interconnects on-chip and between chips simply cannot scale to the bandwidth and latency demands. As one analyst put it, when clusters grow to tens of thousands of accelerators, “traditional electronic interconnects cannot scale efficiently due to bandwidth, latency, and power constraints”. In practice this means GPUs often sit idle waiting for data to traverse slow electrical links. We see this as an architectural crisis: continuing on the current path will yield diminishing returns even as demand for computation accelerates dramatically.
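To make the bottleneck concrete, consider a simple step-time model in which each accelerator alternates between computing and exchanging data. The sketch below uses purely illustrative numbers of our own choosing (compute rate, link bandwidth, bytes exchanged per step); none are measured figures from the systems discussed here.

```python
# Back-of-envelope model of the data-movement bottleneck. All parameters
# are illustrative assumptions, not measurements.
compute_rate = 1e15      # FLOP/s of usable compute per accelerator (assumed)
link_bandwidth = 50e9    # bytes/s of effective cross-node bandwidth (assumed)
work_per_step = 2e13     # FLOPs each accelerator computes per step (assumed)
bytes_per_step = 10e9    # bytes it must exchange per step (assumed)

t_compute = work_per_step / compute_rate   # time spent computing
t_comm = bytes_per_step / link_bandwidth   # time spent on the wire

# If communication cannot be overlapped with compute, the accelerator is
# idle for the communication fraction of every step:
idle = t_comm / (t_compute + t_comm)
print(f"compute {t_compute*1e3:.0f} ms, comm {t_comm*1e3:.0f} ms, idle {idle:.0%}")
# -> compute 20 ms, comm 200 ms, idle 91%
```

Under these assumptions the accelerator spends roughly nine-tenths of every step waiting on the fabric, which is the pattern our benchmarks point to.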
The AI Imperative and Data Bottlenecks
Artificial intelligence is entering a new era of scale. State-of-the-art neural networks (large language models, vision transformers, etc.) are growing exponentially in size and complexity. Training and inference for these models require not only raw operations per second but tight coupling between thousands (or millions) of processing elements. In our internal benchmarks we note that current GPU-based systems spend much of their time on data transfer rather than compute. As public reports highlight, training today’s largest models can consume more energy than dozens of homes use in a year, and global data-center power demand is poised to skyrocket. Thus our research concludes: solving AI workloads at scale demands a radically different approach to both computing and interconnection, not merely squeezing more transistors onto chips.
Photonics: The Speed and Efficiency of Light
Driven by these insights, our team has been investigating photonics – computing and communication using light – as a transformative alternative. Photonic devices harness photons instead of electrons, yielding several profound advantages. First, photons travel at the speed of light and can be modulated at terahertz (THz) frequencies, orders of magnitude above the gigahertz ranges of electronic circuits. In practice this means optical circuits can carry vastly more data per second: for example, recent hardware has demonstrated on-chip optical neural networks that operate with “minimal energy loss” at light speed, matching electronic precision while consuming far less power. Second, light passing through a waveguide generates almost no resistive heating. Unlike electrons, photons have no charge and therefore do not dissipate energy as heat when traveling through transparent materials. In our tests and readings we confirmed that photonic chips produce far less heat – easing cooling requirements – because data is carried by light pulses rather than currents. Third, photonics naturally supports massive parallelism via wavelength multiplexing: multiple colors (wavelengths) of light can simultaneously traverse the same fiber or waveguide, each carrying a separate data stream. This is already standard in optical communications (DWDM) and is now being integrated on-chip. In effect, a single optical link can multiply its throughput by the number of usable wavelengths.
Photonic circuits can operate at THz frequencies rather than GHz, dramatically increasing achievable signaling rates.
Optical links generate negligible heat compared to copper: photons carry information without resistive losses.
Dense wavelength-division multiplexing (DWDM) lets one fiber carry many channels, multiplying bandwidth without extra pins.
Optical signals are immune to electromagnetic interference and difficult to tap. By our analysis of industry data, fiber-optic links do not radiate electromagnetic fields outside the fiber, so they cannot be eavesdropped on or jammed without physically intercepting the link. This built-in security is critical for defense and sensitive applications.
Together, these properties suggest that photonic systems could achieve orders of magnitude more data throughput per watt than any all-electronic design.
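As a rough illustration of that throughput-per-watt argument, the sketch below multiplies out an assumed DWDM link budget. The channel count, per-channel rate, and energy-per-bit figure are our own illustrative assumptions, not measured device data.

```python
# Minimal sketch: how DWDM multiplies the capacity of a single waveguide.
# All parameters are illustrative assumptions.
channels = 64             # wavelengths carried on one waveguide (assumed)
rate_per_channel = 100e9  # bits/s per wavelength (assumed)
energy_per_bit = 1e-12    # joules/bit for the optical link (assumed ~1 pJ/b)

aggregate_bw = channels * rate_per_channel   # total bits/s on one waveguide
link_power = aggregate_bw * energy_per_bit   # watts to sustain that rate

print(f"aggregate: {aggregate_bw/1e12:.1f} Tb/s at {link_power:.1f} W")
# -> aggregate: 6.4 Tb/s at 6.4 W, from a single fiber or waveguide
```

Even with these conservative channel counts, a single waveguide outruns an entire board of copper traces at a fraction of the power.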
Photonic Interconnects: Light Highways on Chip
One of the most promising domains is photonic interconnect technology, which replaces traditional copper wires between chip components with on-chip optical waveguides. In our conceptual studies, we examine architectures where processors (CPUs, GPUs, or specialized accelerators) communicate via integrated optics. For example, researchers have built silicon photonic “interposers” that embed hundreds of fibers or waveguides directly onto a chip substrate. These 3D-stacked photonic interposers can deliver hundreds of terabits per second of bandwidth across the chip. As one recent analysis notes, a 3D photonic interposer design (supporting multi-reticle die complexes) achieves on the order of 114 Tbps of optical bandwidth, effectively connecting thousands of accelerators within a single domain. To put that in context, typical high-end GPUs communicate over copper-limited links of only a few hundred gigabits per second. Embedding photonic waveguides across the chip surface removes the I/O “shoreline” constraint – bandwidth is no longer confined to the chip edges – and thereby relieves the critical bandwidth bottleneck.
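The gap is stark when written out. Taking the 114 Tbps interposer figure cited above against a 400 Gb/s electrical GPU link (our choice of a representative high-end figure, an assumption rather than a quoted spec):

```python
# Ratio of the cited photonic interposer bandwidth to a representative
# copper-limited GPU link. The 400 Gb/s electrical figure is an assumption.
interposer_bw = 114e12  # bits/s, cited 3D photonic interposer
electrical_bw = 400e9   # bits/s, assumed high-end electrical link
print(f"photonic / electrical: {interposer_bw / electrical_bw:.0f}x")  # -> 285x
```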
We incorporate these findings into our modeling. For AI clusters, a photonic fabric would allow simultaneous full-rate communication between all cores without head-of-line blocking. By our calculations, even modest on-chip photonic deployments (with 32–64 wavelength channels per fiber) can increase aggregate data throughput by 5–10× compared to the best electrical networks. Moreover, each photonic link can carry bi-directional traffic with low-latency switching. These capabilities promise to turn a multi-chip cluster into something closer to a unified photonic supercomputer, in which data flows at near-light speed across entire racks. This aligns with Defense Advanced Research Projects Agency (DARPA) goals: the PIPES program explicitly targets embedded optical signaling at ~100 Tbps per package with sub-picojoule-per-bit efficiency.
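It is worth unpacking what that target implies for power. The sketch below takes the quoted ~100 Tbps package bandwidth, reads “sub-picojoule per bit” as 0.5 pJ/b, and contrasts it with 10 pJ/b for conventional electrical I/O; both per-bit figures are our illustrative assumptions.

```python
# What PIPES-class targets imply at package scale. The 0.5 pJ/b and
# 10 pJ/b energy figures are illustrative assumptions.
package_bw = 100e12          # bits/s per package (quoted PIPES-scale target)
optical_pj_per_bit = 0.5     # "sub-picojoule" read as 0.5 pJ/bit (assumed)
electrical_pj_per_bit = 10.0 # conventional electrical I/O (assumed)

optical_w = package_bw * optical_pj_per_bit * 1e-12
electrical_w = package_bw * electrical_pj_per_bit * 1e-12
print(f"optical I/O: {optical_w:.0f} W vs electrical I/O: {electrical_w:.0f} W")
# -> optical I/O: 50 W vs electrical I/O: 1000 W at the same 100 Tb/s
```

Under these assumptions, driving 100 Tbps electrically would consume a kilowatt of I/O power per package; optics brings it into the tens of watts.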
Photonic Processors: Parallel Computation in Light
Beyond data links, we examine photonic compute units. Recent prototypes have demonstrated all-optical neural-network accelerators. Unlike conventional GPUs, a photonic processor can literally compute using light interference. Our literature review highlights designs where inputs (e.g. vector elements) modulate the intensity of laser beams, which then interfere through on-chip waveguides and diffractive elements to perform matrix multiplications. Because interference happens at the speed of light, these photonic tensor cores can evaluate deep-learning layers in nanoseconds. For example, MIT researchers built a fully integrated photonic chip that completed a key neural network classification task in under 0.5 nanoseconds with >92% accuracy. Crucially, this was achieved with far lower electrical energy: the chip’s optical computations required only milliwatts of optical power, whereas an equivalent electronic inference would consume many watts. Our analysis (using published benchmarks) confirms that these photonic accelerators can achieve similar precision to CMOS GPUs while cutting power by roughly half.
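To show the mathematical trick such meshes exploit, the sketch below factors a weight matrix with the SVD: the two unitary factors correspond to lossless interferometer meshes and the diagonal to per-channel gain, which is how Mach-Zehnder-style photonic tensor cores are commonly organized. This is a numerical analogy of that architecture, not a model of the MIT device.

```python
import numpy as np

# Conceptual sketch of a photonic matrix multiply. Any real weight matrix
# factors as W = U @ diag(S) @ Vt (SVD); the unitary U and Vt map onto
# meshes of interferometers (lossless interference) and diag(S) onto
# per-channel attenuation/gain. A numerical analogy, not a device model.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))   # weight matrix "programmed" into the mesh
x = rng.normal(size=4)        # input vector encoded as optical amplitudes

U, S, Vt = np.linalg.svd(W)   # unitary factors -> interferometer meshes

y_optical = U @ (S * (Vt @ x))   # Vt mesh, then gain stage, then U mesh
y_reference = W @ x              # conventional electronic result

assert np.allclose(y_optical, y_reference)
print("factorized 'mesh' matvec matches the direct matrix multiply")
```

The point of the decomposition is that unitary transforms are exactly what lossless interference can implement, so the only energy cost sits in the diagonal gain stage and the detectors.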
We also note the unique parallelism: photonic cores can process multiple data streams in parallel by using different wavelengths. In effect, a single photonic matrix unit can handle tens of independent operations simultaneously (one per color band) – a form of hardware reuse that is impossible with electronic logic, where each wire carries a single signal at a time. This “wavelength multiplexing” can dramatically boost operations per square millimeter. In practical terms, a future data center blade with integrated photonic accelerators could replace dozens of GPUs, yielding orders-of-magnitude better throughput per kilowatt. While research prototypes like Lightmatter’s Envise chip have already shown these gains (running neural nets on light with energy savings of 50% or more), what excites us is that the underlying physics imposes no fundamental reason this can’t scale further as photonic integration improves.
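Numerically, that wavelength reuse behaves like a batched matrix multiply through a single programmed mesh: one weight matrix, one input vector per wavelength. The sizes and channel count below are illustrative assumptions.

```python
import numpy as np

# Sketch of wavelength-parallel reuse: one programmed photonic mesh, with
# each DWDM wavelength carrying an independent input vector. Numerically
# this is a batched matvec; sizes and channel count are assumptions.
rng = np.random.default_rng(1)
W = rng.normal(size=(8, 8))              # single programmed weight mesh
n_wavelengths = 16                       # independent color channels (assumed)
X = rng.normal(size=(8, n_wavelengths))  # one input vector per wavelength

Y = W @ X        # all 16 matvecs traverse the same hardware in one pass
print(Y.shape)   # (8, 16): sixteen results per pass through one mesh
```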
Infrastructure and Production Considerations
In parallel with device innovation, our team is studying the manufacturing and systems side. We see that modern silicon photonics leverages existing semiconductor fabrication. Major fabs now offer silicon photonics platforms (e.g. GlobalFoundries’ Fotonix and comparable TSMC offerings) that allow modulators, detectors, and waveguides to be patterned alongside transistors. This compatibility means photonic chips can (in principle) be produced in high-volume CMOS foundries. Indeed, the latest research platforms demonstrate wafer-scale integration of photonic components using established lithography and bonding techniques.
In system design, photonic engines must be integrated into data-center infrastructure. We note that embedding optics inside servers requires new packaging: fibers must align to chips with sub-micron accuracy, and thermal management must handle any residual heat in mixed photonic-electronic dies. These engineering challenges are nontrivial, but our analysis indicates they are tractable. For example, hybrid integration approaches (combining silicon photonics with III-V lasers and amplifiers) have demonstrated on-chip optical links with hundreds of gigabits per second of throughput. We also see pilot facilities in academia and industry (including small clean-rooms and testbeds) proving out these concepts. Over time, we expect chip-scale photonic modules to plug into servers much as GPUs do today, after which the scalability advantages will dominate.
Strategic and Dual-Use Implications
From a strategic perspective, photonic computing aligns with LupoToro’s investment thesis in dual-use technologies. High-bandwidth, low-power compute is needed across both commercial AI and defense sectors. Critical missions like real-time intelligence analysis, autonomous systems, and space-based networks all demand sustainable, high-density processing. Photonic systems deliver not only raw performance but also security and robustness. As noted, optical links cannot be tapped or disrupted by standard electronic probes, offering inherent data security. They also do not emit radiative noise or suffer from crosstalk the way copper traces do. For battlefield or remote deployments, this immunity to electromagnetic interference (EMI) is invaluable.
We observe growing government interest: DARPA’s PIPES program, for instance, explicitly cites dual-use objectives, targeting photonic connectivity for AI and HPC. Similarly, allied nations are funding photonics initiatives for both civilian 5G/6G networks and military communications. In summary, our analysis suggests photonic computing isn’t just a niche accelerator – it could become critical infrastructure for secure, sovereign AI capability. This matches LupoToro’s mandate to back transformative, future-proof technologies. As one industry report notes, embedding optics into CPUs and GPUs promises “unprecedented bandwidth density, efficiency, and reach,” directly answering the scaling challenges that traditional chips can no longer meet.
Photonic computing offers a concrete, near-term path to continue advancing computational power even as silicon scaling stalls. By leveraging the physics of photons – their speed, parallelism, and low-loss transmission – photonic architectures promise to break through today’s performance and energy ceilings. We are actively tracking and evaluating leading photonic technologies and companies, and incorporating this knowledge into our strategic planning. Given the mounting commercial demand and national security impetus, we anticipate photonic processors and interconnects moving from the lab to production within a few years. In our view, Moore’s Law may be over, but light itself – the photon – is already carrying the future of computing forward.