OpEd: Exponential Intelligence Risk and Future Markets

The Fermi Paradox and the Great Filter imply that fintech providers, AI platforms, governments, intelligence agencies, and society as a whole must integrate exponential technology growth with robust risk modelling, governance, and long-term strategic foresight to ensure scalable, stable, and sustainable economic and civilisational advancement in an AI-driven future.

The article that follows begins with a familiar scientific puzzle - first articulated by Enrico Fermi - about the apparent absence of other technological civilisations. But its relevance extends well beyond astrophysics. At its core, it is a structured way of thinking about how complex systems emerge, scale, and, critically, whether they tend to persist.

For investors, this is directly applicable to long-term thinking. Most financial models assume continuity: that markets function, institutions hold, and innovation compounds over time. The possibility that advanced systems may be inherently unstable over longer horizons introduces a different kind of risk: low-frequency, high-impact discontinuities. This does not invalidate growth assumptions, but it does place more weight on resilience, adaptability, and exposure to sectors that either mitigate systemic risk or are less dependent on uninterrupted global stability.

For fintech providers, the implications are practical rather than theoretical. Financial systems are part of the infrastructure that enables coordination at scale. Questions about durability translate into design considerations: how systems behave under stress, how failures propagate, and how trust is maintained when conditions are less than ideal. Robustness, redundancy, and clarity in system design become more important as complexity increases.

For AI platforms and developers, the connection is more immediate. The article touches on the idea that technological capability can advance faster than the frameworks needed to manage it. In current terms, this maps to well-known challenges around alignment, reliability, and governance. The issue is not whether AI will be transformative (it already is) but whether its deployment can be stabilised in ways that reduce unintended outcomes over time.

Governments and intelligence agencies will recognise this as a question of strategic risk. The focus is not only on external threats, but on systemic ones: cascading failures, technological missteps, and coordination breakdowns. Planning in this context requires working with uncertainty and acknowledging that some risks are difficult to observe directly until they materialise.

For the broader community, the relevance is simpler. The article frames progress as something that is not guaranteed to be self-sustaining. Stability depends on choices: how technologies are used, how institutions evolve, and how risks are managed collectively.

What follows can be read as a model for thinking about these issues over long timescales. It does not provide definitive answers, but it does clarify the structure of the problem: how often complex systems succeed, how long they tend to last, and what might limit their persistence. Seen this way, the question is not just “where is everybody?” It is also how systems like ours develop, and what determines whether they continue.

Where is Everybody? The Quiet Sky and the Fragile Century

In 1950, at lunch in Los Alamos, the physicist Enrico Fermi asked the question that still haunts modern astronomy: “Where is everybody?” It was not a mystical question. It was a numerical one. The Milky Way is about 100,000 light-years across and probably contains roughly 100 to 400 billion stars. Some parts of it began forming stars around 13 billion years ago, while our Sun is only about 4.6 billion years old. Count the galaxy’s stars at one per second and you would still be counting long after the birth of agriculture. On those terms alone, Earth does not look early. It looks late.

The case for expecting company has only strengthened. NASA’s Kepler mission watched more than 150,000 stars and helped show that planets are the rule, not the exception. A landmark 2013 analysis found that about 22% of Sun-like stars may host roughly Earth-sized planets in the habitable zone; later NASA work suggested the real figure could be closer to half. Around red dwarfs, which dominate the galaxy, estimates imply tens of billions of potentially habitable rocky worlds in the Milky Way alone. Nor are Earth’s ingredients unique. Organic compounds have been detected in meteoritic material and star-forming regions, amino acids are known from meteorites, and water is abundant in stellar nurseries and throughout planetary systems. The galaxy is old, its chemistry is fertile, and the number of possible stages for life is immense. 

Set aside radio signals altogether and the problem becomes harder, not easier. The physicist Frank Tipler argued in 1981 that a civilisation does not need faster-than-light travel to spread through the galaxy; self-replicating probes using local material would do. Later work reached the same broad conclusion: even relatively slow probes exploring and reproducing from system to system could traverse a significant fraction of the Milky Way on timescales tiny compared with galactic history. Whether one prefers conservative estimates of under 300 million years or newer models that shrink the timescale to tens of millions, the principle is the same. Against 13 billion years, even those slower numbers are a blink. If one expansionist civilisation had emerged far enough back, the galaxy should not look untouched.

The silence after all the searching

The history of SETI is not one of indifference but of repeated, increasingly disciplined listening. In 1960, Frank Drake ran Project Ozma, the first modern radio SETI experiment. In 1977, Ohio State University’s Big Ear telescope recorded the famed Wow! signal, a narrowband event so striking that astronomer Jerry Ehman circled the printout and wrote “Wow!” beside it. It lasted for the telescope’s full observing window, looked celestial rather than local, and has never been convincingly repeated. The modern flagship effort, Breakthrough Listen, was launched with US$100 million to survey the million nearest stars, the galactic plane, and 100 nearby galaxies. Its public releases have covered hundreds and then more than a thousand nearby stars in unprecedented detail, yet no candidate has survived as a confirmed technosignature. 

The search has also widened beyond classic radio SETI. An Australian low-frequency survey using the Murchison Widefield Array scanned a region containing at least 10 million stars and found no technosignatures. Infrared searches motivated by Dyson-style waste heat have examined roughly 100,000 galaxies and found no obvious galaxy-spanning civilisations; in the best-known survey, the most extreme candidates could be explained by astrophysical dust and star formation rather than engineering. Meanwhile, ESA’s Gaia mission has mapped the motions and properties of about two billion stars, vastly increasing the survey depth for odd stellar behaviour without yielding any credible confirmed sign of large-scale astro-engineering. 

Even the Solar System has not escaped suspicion. NASA’s technosignatures workshop and long-running SETA literature explicitly discuss possible probes or artefacts on planetary surfaces, asteroids, or the stable Earth-Moon Lagrange regions; the Moon is especially attractive because its battered surface preserves ancient history. Yet no confirmed artefact has turned up. When the third confirmed interstellar object, 3I/ATLAS, was discovered on 1 July 2025, observers quickly treated it as the kind of target a serious technosignature search should investigate. Breakthrough Listen did exactly that and reported no evidence that it was anything other than a natural object. Still, intellectual honesty matters here: the cosmic haystack remains enormous. SETI has produced a meaningful silence, but not an exhaustive one. We have learned that the slices we have searched are quiet, not that the universe has been searched out. It was to explain that worsening tension between expectation and observation that the economist Robin Hanson coined the term “the Great Filter.” 

The comforting answers that do not hold

The first refuge is always the most flattering one: perhaps they are here, or near enough, but are choosing not to interfere. In 1973, astronomer John Ball proposed the zoo hypothesis, the idea that advanced civilisations might treat Earth as a preserve. It is not silly. Humans do, after all, debate the ethics of contact and observation. The weakness is not logical possibility but universality. To explain total silence, every relevant civilisation, faction, or dissident actor would need to honour the same quarantine indefinitely. One break in discipline over millions of years would be enough to puncture the silence. 

Distance by itself also fails as a complete answer. Yes, the galaxy is vast. Yes, conversation across thousands of light-years is painfully slow. But expansion through deep time is a different problem. If probe-based settlement or exploration is physically feasible at all, then distance is an inconvenience, not an absolute barrier. Nor does it rescue us to say that advanced species retreat into virtual worlds or migrate to communication methods we do not yet understand. Those are plausible behaviours for some civilisations; they do not solve the paradox unless they become nearly universal. A galaxy in which even a tiny minority remained outward-facing should still contain some visible relics, leaks, or heat. What we have instead is no credible confirmed technosignature at all. 

If the filter is behind us

The most hopeful version of the Great Filter is the oldest: perhaps life itself is fantastically hard to start. Life on Earth seems to show up early in the record, with evidence reaching back at least 3.77 billion years and perhaps earlier, not long after Earth became capable of sustaining liquid water. But a single quick success does not tell us whether the process is easy or a cosmic fluke. Every living thing we know shares a common biochemistry, which means all terrestrial life descends from one ancestral success, not from many independently confirmed starts. And despite the extraordinary progress of synthetic biology, researchers still openly describe the construction of a fully self-reproducing artificial cell from non-living components as an unsolved challenge. If abiogenesis is rare enough, then the universe may be full of chemically interesting planets that never became alive. 

Move one step forward and the bottleneck may be the origin of complex cells. All known eukaryotes descend from an ancestor that already possessed mitochondria, and modern reviews place eukaryogenesis somewhere between roughly 1.8 and 2.7 billion years ago. The biochemist Nick Lane of University College London has argued that mitochondria solved an energy problem that simple cells could not, making complex life possible only after a singular symbiotic merger. Put more starkly: microbial life may be common while animal-level complexity remains rare. A planet can teem and still never produce eyes, forests, nervous systems, or anything that builds a telescope. 

Then comes intelligence itself. Earth is full of counterexamples to inevitability. Trilobites lasted nearly 300 million years. Dinosaurs dominated for more than 150 million years. Sharks trace back roughly 450 million years. Insects have flourished for hundreds of millions of years. Stephen Jay Gould made the famous argument that if one could rewind the tape of life, intelligence might never reappear. Ernst Mayr pressed the point even harder: if there have been billions, perhaps tens of billions, of species since life began, only one has produced a civilisation capable of sending radio signals into space. If any of these steps is the dominant bottleneck, then the silence is not ominous. It is the signature of extraordinary rarity behind us.

If the filter is ahead of us

The darker possibility is that the hardest test still lies in front of any young technological species. The human timeline is alarmingly compressed. Homo sapiens is about 300,000 years old. Agriculture emerged roughly 11,700 years ago. Writing is about 5,000 years old. We split the atom in 1945. Humans first walked on the Moon in 1969. Publicly transformative large language model systems appeared less than a decade ago. In cosmic terms, detectability and danger have arrived almost together. The same species that learned to broadcast beyond its atmosphere also learned, in the same technological adolescence, how to industrialise war, rework the biosphere, and build machines whose capabilities it does not fully understand. 

The historical record gives that fear a human face. In 1983, Stanislav Petrov chose not to escalate after a false Soviet early-warning alert indicated incoming US missiles. During the Cuban Missile Crisis, Vasili Arkhipov refused consent for the launch of a nuclear torpedo from a Soviet submarine near Cuba. In 1995, a scientific rocket launched from Norway triggered Russian nuclear alarm procedures and put Boris Yeltsin on the clock. These were not triumphs of perfect systems. They were survivals purchased by individual judgment under extreme uncertainty. 

This is why the arithmetic becomes chilling. If a civilisation faced, purely for illustration, a 2% chance of self-destruction per century, its chance of surviving 1,000 years would be about 81.7%; after 5,000 years, about 36.4%; after 10,000 years, about 13.3%; after 50,000 years, roughly 0.004%; and after a million years, effectively zero for any practical purpose. A galactic civilisation does not need to die in one melodramatic instant. It only needs to remain structurally unsafe for long enough. Deep time does the rest.
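The arithmetic above is easy to reproduce. A minimal sketch, using the article's own 2% per-century figure, which is an illustrative assumption rather than an empirical estimate:

```python
# Survival probability under a constant per-century risk of self-destruction.
# The 2% hazard rate is the article's illustrative assumption, not a measured value.

def survival_probability(p_fail_per_century: float, years: float) -> float:
    """P(still intact after `years`) = (1 - p_fail) ** number_of_centuries."""
    centuries = years / 100
    return (1 - p_fail_per_century) ** centuries

for years in (1_000, 5_000, 10_000, 50_000, 1_000_000):
    p = survival_probability(0.02, years)
    print(f"{years:>9,} years: {p:.4%}")
```

Running this reproduces the figures in the text: roughly 81.7% at 1,000 years, 36.4% at 5,000, 13.3% at 10,000, and effectively zero at a million. The point of the exercise is that even a small constant hazard rate compounds mercilessly over deep time.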

There is an even colder variant of the same argument: the filter may not be extinction but succession. AI systems, as defined in modern policy and industry frameworks, pursue objectives set by training, architecture, and human input; they do not inherit biological motives from evolution by default. That means a post-biological civilisation might not be expansionist, communicative, or even curious in the recognisably human sense at all. It may optimise quietly, locally, and with perfect indifference to being seen. And perhaps there is no single killer step anyway. Perhaps life is a bit rare, complex life rarer, intelligence rarer still, technological stability rarer again, and post-biological continuity silent by its nature. A corridor of moderately difficult doors can be just as fatal as one locked gate. 

The match in the dark room

One of the most useful ways to visualise the paradox is to stop picturing empires and start picturing flashes. Imagine the galaxy as a vast dark room. Each civilisation is a struck match. It flares, burns for a moment, and goes out. The room may have hosted many flames across billions of years, yet almost none need overlap. The universe is about 13.8 billion years old; human radio leakage has been escaping Earth for only about a century. We feel ancient because a century is long compared with a life. It is nothing compared with the room. 

This is the most unsettling possibility of all: not that humanity is uniquely cursed, but that we may be ordinary. Civilisations may arise again and again, acquire power rapidly, notice the silence, and then vanish before their signals, machines, or monuments accumulate into anything another species could catch. That was the direction of Mayr’s warning. On Earth, evolutionary success belongs to durable strategies: bacteria, insects, sharks, lineages that persist because they fit, not because they dominate. Technological intelligence may confer astonishing short-run advantages while remaining a terrible long-run bet. In that reading, intelligence is not the crown of evolution. It is a high-risk experiment. 

What ruins cannot tell us

There is another reason silence may mislead us. Civilisation is materially flimsier than it feels from the inside. Geological and stratigraphic work on “technofossils” suggests that what humanity leaves over very long timescales is less likely to be intact cities than anomalous layers: plastics, altered sediments, unusual chemistry, isotopes, and signatures of industrial transformation. On Earth, after a few million years, the obvious surface evidence of an industrial civilisation would be expected to fade drastically, leaving subtle traces rather than skylines. That means a dead civilisation on a distant world could be real and still be observationally mute. 

Across interstellar distances, the problem worsens. NASA’s own technosignature discussions stress that we do not see exoplanets up close; we infer atmospheres, temperatures, transits, spectra, and broad anomalies. We do not resolve alien roads, abandoned satellites, or crumbling towers. So even a galaxy littered with dead technological histories could present itself to us as natural silence. Ruins do not broadcast. Ruins do not replicate. Ruins do not build Dyson spheres after the builders are gone. They merely weather into the background. 

That is why the Fermi problem is larger than extraterrestrials. It is a theory of civilisational durability disguised as a question about aliens. Intelligence and survival are not identical traits. A species can understand the universe and still fail to remain in it. The laws of physics guarantee no rescue clause for minds. The stars will keep burning whether we endure or not. That is not a counsel of despair. It is a reminder that responsibility does not sit in the cosmos. It sits here. 

How we can apply this: Investors, Fintechs, AI Development, Governments and Society

What began with a question about the absence of other technological civilisations ultimately resolves into a framework for thinking about persistence. The significance of the argument is not that it explains the silence, but that it reframes it. It shifts attention from whether systems emerge to whether they endure.

At a high level, the “Great Filter” is best understood not as a single event but as a distribution of probabilities across a sequence of transitions. From non-living chemistry to self-replicating systems, from simple cells to complex organisms, from intelligence to technological capability, and from capability to long-term stability, each step appears to carry some degree of fragility. The absence of observable peers suggests that at least one of these transitions is extremely unlikely, or that survival across them is statistically rare.

This distinction matters. A system can be easy to start and still be hard to sustain. In fact, the two may be inversely related. The more frequently complex systems arise, the stronger the implication that they tend to fail after reaching a certain level of capability. This is where the argument intersects with domains far removed from astrophysics.
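The "sequence of transitions" framing above is, structurally, a product of conditional probabilities, in the spirit of the Drake equation. A sketch with purely hypothetical placeholder probabilities (none of these values are estimates from the article or the literature):

```python
# The Great Filter as a chain of conditional transitions.
# Every probability below is a hypothetical placeholder for illustration only.

transitions = {
    "abiogenesis":         1e-3,  # non-living chemistry -> self-replication
    "eukaryogenesis":      1e-2,  # simple cells -> complex cells
    "intelligence":        1e-3,  # complex life -> technological capability
    "long_term_stability": 1e-2,  # capability -> enduring civilisation
}

p_persist = 1.0
for step, p in transitions.items():
    p_persist *= p

print(f"P(candidate world yields a lasting, visible civilisation) = {p_persist:.1e}")
```

With these placeholders the product is 1e-10, which illustrates the "corridor of moderately difficult doors" point made later in the article: no single step needs to be near-impossible for the combined outcome to be vanishingly rare.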

Investors:

Most financial models are built on continuity. They assume that markets remain liquid, institutions remain functional, and shocks, while disruptive, are ultimately absorbed. The implicit premise is that the system persists. The Great Filter framework challenges this at a structural level. It suggests that as systems scale, they may accumulate forms of endogenous risk: risks generated by their own complexity, interdependence, and optimisation.

In finance, this is already visible in the distinction between idiosyncratic risk and systemic risk. The latter refers not to the failure of individual components, but to the breakdown of the system itself, often through cascading effects triggered by interconnections. In such environments, diversification provides limited protection, because failures are correlated rather than independent. This has two implications for long-term capital allocation: First, expected returns become conditional on system stability. Compounding only operates if the underlying system continues to function. Second, efficiency and resilience are often in tension. Highly optimised systems (tight margins, just-in-time supply chains, concentrated dependencies) tend to perform well under normal conditions but degrade rapidly under stress.

The practical shift is subtle but important: from modelling upside to modelling survivability. This places greater weight on robustness, redundancy, and exposure to assets or sectors that either stabilise systems (infrastructure, risk mitigation, security) or remain viable under discontinuity.
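The claim that "compounding only operates if the underlying system continues to function" can be made concrete. A sketch comparing naive compound growth with a survival-weighted expectation; the 7% return and 0.5% annual discontinuity rate are hypothetical illustration values, not market estimates:

```python
# Expected terminal wealth when compounding is conditional on system survival.
# r (annual return) and h (annual probability of a total systemic discontinuity)
# are hypothetical values chosen for illustration only.

def naive_growth(r: float, years: int) -> float:
    """Terminal multiple assuming the system always persists."""
    return (1 + r) ** years

def survival_weighted(r: float, h: float, years: int) -> float:
    """Expected multiple if a discontinuity wipes the position out entirely:
    E[W] = (1+r)^y * (1-h)^y = ((1+r)(1-h))^y."""
    return ((1 + r) * (1 - h)) ** years

r, h = 0.07, 0.005
for years in (10, 30, 100):
    print(years, round(naive_growth(r, years), 2),
          round(survival_weighted(r, h, years), 2))
```

The gap between the two columns widens with the horizon, which is the quantitative version of the shift from modelling upside to modelling survivability: over long horizons, the survival term can dominate the return term.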

Fintech and Wider Tech:

Financial and technological systems are not just participants in this dynamic, they are its infrastructure. They enable coordination at scale, but in doing so, they also increase coupling between components.

In tightly interconnected systems, failure is rarely isolated. A single disruption can propagate through dependencies, producing cascading failures in which each breakdown increases the likelihood of further breakdowns. This is not a theoretical concern; it is a well-documented property of networked systems ranging from power grids to financial markets. The key design tension is between efficiency and fault tolerance. Removing redundancy improves performance under normal conditions but reduces the system’s ability to absorb shocks. Adding redundancy does the opposite.

For fintech and infrastructure providers, this translates into specific design considerations:

  • Failure containment: ensuring that local failures do not propagate across the system

  • Decoupling: limiting unnecessary interdependencies between critical components

  • Graceful degradation: designing systems that reduce functionality rather than collapse under stress

  • Transparency of state: ensuring operators can observe system conditions before failure becomes irreversible

As systems scale, these considerations become more (not less) important. Complexity does not just increase capability; it alters the failure surface.
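The design considerations above can be illustrated with a toy cascade model on a dependency graph. The component names and the failure rule (a component fails as soon as any dependency has failed) are invented for illustration; real systems fail in messier, partial ways:

```python
# Toy cascading-failure model: a component fails if any of its dependencies
# has failed. The graph below is purely illustrative.

deps = {
    "payments":  {"ledger", "auth"},
    "ledger":    {"db"},
    "auth":      {"db"},
    "reporting": {"ledger"},
    "db":        set(),
}

def cascade(initial_failures: set) -> set:
    """Propagate failures through the dependency graph to a fixed point."""
    failed = set(initial_failures)
    changed = True
    while changed:
        changed = False
        for component, requires in deps.items():
            if component not in failed and requires & failed:
                failed.add(component)
                changed = True
    return failed

print(sorted(cascade({"db"})))    # the shared dependency takes everything down
print(sorted(cascade({"auth"})))  # a peripheral failure stays mostly contained
```

The contrast between the two runs is the decoupling argument in miniature: the blast radius of a failure is determined by the shape of the dependency graph, not by the size of the initial fault.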

AI Development:

Artificial intelligence compresses this problem into a shorter timeframe. It accelerates capability while leaving governance, alignment, and oversight comparatively underdeveloped.

Current research already identifies several systemic risk pathways: difficulty detecting errors, overreliance on automated outputs, biased decision-making at scale, and challenges in monitoring increasingly complex systems. More advanced scenarios, such as tightly coupled AI systems interacting across domains, introduce the possibility of emergent, system-level failures rather than isolated faults.

What distinguishes AI from earlier technologies is not just its power, but its integration into decision-making loops. As reliance increases, the boundary between tool and infrastructure begins to blur.

This creates a specific form of fragility:

  • Errors scale with deployment

  • Failures become harder to detect in real time

  • Human oversight becomes less effective as system complexity increases

In this context, alignment is not only an ethical problem but a systems problem. It concerns whether behaviour remains bounded under changing conditions. If the Great Filter includes stages where systems become too capable to reliably control, AI represents a direct instantiation of that risk. The challenge is not whether such systems can be built, but whether they can be stabilised.

Government and Strategic Partners:

For governments and intelligence agencies, the framework maps onto systemic risk at the highest level. The concern is not only discrete threats, but interacting ones: failures that propagate across domains.

Modern systems are deeply interdependent: financial networks depend on digital infrastructure; digital systems depend on energy grids; supply chains depend on both. This creates conditions in which small disruptions can scale into systemic events. Historical examples (from financial crises to global pandemics) demonstrate how shocks can move through interconnected systems, amplifying as they go. The challenge is that these dynamics are often nonlinear and difficult to predict.

This creates three strategic constraints:

  • Detection lag: systemic risks often become visible only after they begin to materialise

  • Coordination difficulty: responses require alignment across institutions with different incentives

  • Model limitations: traditional forecasting struggles with low-probability, high-impact events

Policy responses increasingly reflect this reality, focusing on resilience rather than prevention alone: stress testing, redundancy in critical systems, and scenario planning for tail risks. In this context, the Great Filter is not a distant abstraction. It is a way of framing the long-term consequences of systemic misalignment.

Society and Community:

At the societal level, the conclusion is less technical but more fundamental. Progress is often treated as self-reinforcing: advances in knowledge and capability are assumed to produce stability over time. The structure of the problem suggests otherwise. Increased capability expands both opportunity and risk. It creates new forms of coordination, but also new modes of failure.

This reframes responsibility. Stability is not an emergent property of progress; it is a maintained condition. It depends on:

  • how technologies are deployed

  • how institutions adapt

  • how collective risks are recognised and managed

The absence of clear external examples reinforces this point. There is no observable evidence that systems like ours tend to persist indefinitely.

This brings the argument back to its central uncertainty. The philosopher Nick Bostrom has pointed out that the location of the filter determines how we should interpret new information. If lifeless but habitable environments are common, it suggests that early steps are difficult and that we may already have passed the most improbable transition. If, however, life (or even simple life) turns out to be widespread, then the constraint likely lies ahead. In that case, the silence is not a sign of rarity at the beginning, but of fragility at the end.

What is often missed is that these are not symmetric outcomes in terms of decision-making. An early filter implies historical luck; a late filter implies forward risk. The more evidence accumulates that the early steps are common, the more weight shifts toward the latter interpretation. Decades after Fermi’s question, the empirical situation remains unchanged. Despite a universe with an immense number of stars and potentially habitable planets, no clear evidence of technological civilisations has been observed. This “great silence” does not specify the mechanism, but it does constrain the distribution of outcomes. Either very few systems reach this stage, or very few remain in it for long.

The practical implication is that persistence is not the default state. It is an achievement. From this perspective, the present moment is less notable for what has been achieved than for where it sits in the sequence. A technological civilisation that becomes observable has already passed several unlikely thresholds. What remains unknown is whether it can pass the ones that follow.

That reframes the problem in concrete terms. If discontinuity is a structural feature rather than an anomaly, then maintaining continuity becomes an active constraint satisfaction problem. It requires reducing existential risks, managing technological externalities, and building institutions capable of coordinating at the scale those technologies demand. There is no empirical guarantee that such coordination succeeds. In fact, the absence of visible counterparts suggests that it often does not.

What we can say with confidence is limited, but it matters. The universe clearly allows for the emergence of at least one complex, technological system. What it does not appear to produce, at least within our field of observation, are many that are both visible and long-lasting. That shifts the question. The issue is not simply how such systems come into existence, but how they manage to persist without undermining themselves. The silence is best understood in that light, not as an answer, but as a constraint on what is likely. Within that constraint, there is only one confirmed example to work from. One system has reached this stage. Whether that turns out to be a brief phase or a durable one is still unresolved.
