The race to develop artificial superintelligence (ASI) is often compared to the Manhattan Project. That analogy suggests urgency, secrecy, and a decisive leap forward. Yet a recent paper, The Manhattan Trap: Why a Race to Artificial Superintelligence is Self-Defeating, argues that this analogy hides a dangerous truth. Far from ensuring advantage, a race to ASI creates conditions where every participant faces higher risks of failure. The paper highlights how competition fosters mistrust, raises the chance of conflict, and undermines both safety and democratic norms.
Understanding these dynamics is crucial. If policymakers, corporations, and researchers treat ASI purely as a zero-sum contest, they may inadvertently create the very disasters they fear. The challenge is not only technical but institutional: how to structure incentives, rules, and governance systems so that competition does not lead to unsafe shortcuts.
Why the ASI Race Is a Trap
The authors argue that ASI races are not simply about who gets there first. They are about trust, or rather, the lack of it. Each player suspects that delaying for the sake of safety will allow rivals to seize a permanent advantage. This perception fuels secrecy, rapid deployment, and reluctance to allow independent oversight.
The trap lies in the incentives. Even though all participants would benefit from cooperation, the fear of losing out makes restraint nearly impossible. The outcome is a trust dilemma in which every side chooses riskier strategies than it otherwise would (the sketch after this list makes the payoff logic concrete). This creates three major dangers:
- Conflict escalation: If states interpret ASI development as a path to military dominance, even rumors of progress can trigger defensive or preemptive measures.
- Loss of control: In the rush to deploy, safety mechanisms may be weakened or skipped altogether. That increases the chance of creating systems no one can reliably govern.
- Institutional erosion: Emergency powers, secrecy, or corporate monopolization can corrode democratic oversight and liberal norms.
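To see why racing dominates even when everyone prefers mutual restraint, it helps to write the dilemma down as a payoff matrix. The sketch below is a standard prisoner's-dilemma-style model; the payoff numbers are illustrative assumptions chosen to capture the incentives the paper describes, not values from the paper itself.

```python
# A minimal sketch of the trust dilemma as a two-player payoff matrix.
# The numbers are illustrative assumptions: each side prefers mutual
# restraint to mutual racing, but racing while the rival restrains
# looks (misleadingly) like the best individual outcome.

# Payoffs as (row player, column player); higher is better.
# Strategies: "restrain" (prioritize safety) or "race" (rush deployment).
PAYOFFS = {
    ("restrain", "restrain"): (3, 3),   # cooperation: safe, shared progress
    ("restrain", "race"):     (0, 4),   # restrainer fears a permanent rival lead
    ("race",     "restrain"): (4, 0),   # racer expects a decisive advantage
    ("race",     "race"):     (1, 1),   # mutual racing: high risk for both
}

def best_response(opponent_strategy: str) -> str:
    """Return the strategy that maximizes the row player's payoff
    given a fixed opponent strategy."""
    return max(
        ("restrain", "race"),
        key=lambda s: PAYOFFS[(s, opponent_strategy)][0],
    )

# Whatever the rival does, racing pays more for the individual actor,
# so both sides end up at (race, race), worse for everyone than restraint.
for rival in ("restrain", "race"):
    print(f"If the rival chooses {rival!r}, best response is {best_response(rival)!r}")
```

Running the sketch shows racing is the best response to either rival strategy, which is exactly how both sides get pulled to the mutually worse outcome, and why governance must change the payoffs rather than appeal to goodwill.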
Building Governance That Reduces Risk
If the Manhattan Trap is primarily about incentives, the solution must involve governance and institutions that make cooperation more rational than reckless competition. Several approaches show promise.
Turning Voluntary Norms into Binding Rules
Recent international summits have produced voluntary safety commitments by major AI labs. These declarations signal willingness to cooperate but remain toothless without enforcement. The next step is converting them into binding agreements.
An international compact on frontier AI, equipped with verification mechanisms, could make restraint credible. Just as arms control treaties balance the risks of defection with the rewards of stability, a treaty regime for ASI would reduce the temptation to rush at all costs. Verification is difficult, but without it, agreements risk becoming empty words.
Learning from Nonproliferation Models
Governance of nuclear and chemical weapons offers useful lessons. The International Atomic Energy Agency and the Organization for the Prohibition of Chemical Weapons combine declarations, inspections, monitoring, and audits to make treaties enforceable.
For AI, researchers have suggested “layered verification” systems. These include hardware-level monitoring of advanced chips, cryptographic proof of training runs, third-party audits, and whistleblower protections. No single method is foolproof, but redundancy increases trust. The lesson from nonproliferation is clear: credibility requires multiple, overlapping safeguards.
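As one illustration of what cryptographic proof of a training run might look like, here is a minimal sketch of a hash-chain commitment: the lab folds each checkpoint into a running digest and publishes the final value, so an auditor handed the same checkpoints can recompute and verify it. The scheme, function names, and checkpoint format are illustrative assumptions, not a deployed standard.

```python
# A minimal sketch, assuming a simple hash-chain commitment scheme
# (hypothetical, not a real verification standard): a lab publishes a
# running digest of its training checkpoints so a third-party auditor
# can later confirm the declared sequence was not altered or omitted.
import hashlib

def commit_checkpoint(prev_digest: bytes, checkpoint_bytes: bytes) -> bytes:
    """Fold one checkpoint into the running commitment."""
    return hashlib.sha256(prev_digest + checkpoint_bytes).digest()

def build_commitment(checkpoints: list[bytes]) -> bytes:
    """Compute the final digest over an ordered list of checkpoints."""
    digest = b"\x00" * 32  # agreed-upon genesis value
    for ckpt in checkpoints:
        digest = commit_checkpoint(digest, ckpt)
    return digest

# Illustrative checkpoint payloads; in practice these would be the
# serialized model weights or their hashes at declared training steps.
training_run = [b"ckpt-step-1000", b"ckpt-step-2000", b"ckpt-step-3000"]
published = build_commitment(training_run)

# An auditor given the same checkpoints recomputes and checks the digest.
assert build_commitment(training_run) == published
print(published.hex())
```

On its own such a chain proves little, which is the point of layering: combined with hardware-level chip monitoring and independent audits, it becomes one redundant strand in a web of overlapping safeguards.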
Aligning Incentives for Safety
The Manhattan Trap arises because the fastest route appears to be the most rewarding. Governance must flip that logic. States and corporations should see more benefit in safe progress than in reckless acceleration.
Several tools can help:
- Conditional finance: Public funding and market access could be tied to adherence to safety standards verified by independent audits.
- Export controls with cooperation: Restrictions on key technologies can be paired with shared safety research to prevent them from becoming purely punitive.
- Collaborative testbeds: Common evaluation frameworks and supervised testing environments reward those who demonstrate safety leadership instead of cutting corners.
These mechanisms align competitive incentives with safer outcomes.
Designing a Distributed Governance Architecture
Placing control of ASI in a single institution is risky. Instead, governance should be built as a distributed system with checks and balances. A robust architecture might include:
- A UN-backed secretariat dedicated to AI verification.
- Regional safety institutes with technical capacity.
- Accredited independent auditors empowered to evaluate labs.
- Joint test centers where states and corporations trial high-risk systems under shared oversight.
This model reduces capture by any one actor and maintains plural oversight, while still providing the credibility needed to prevent unsafe races.
Political Challenges Ahead
Implementing such governance will not be easy. States will resist binding restrictions on a technology they believe may secure their future dominance. Corporations will resist limits that appear to slow innovation. Yet the logic of the Manhattan Trap is stark: a permanent lead in ASI is an illusion if the race itself destabilizes global security.
The task for policymakers is to make restraint both rational and enforceable. That means taking voluntary norms and embedding them in law. It means investing in technical research on verification now, before ASI capabilities scale further. And it requires building international institutions that make cooperation more rewarding than unilateral risk-taking.
A Path Out of the Trap
The Manhattan Trap reframes the race to ASI as a shared strategic dilemma. The actors involved are not just technologists but governments, institutions, and societies. If they treat ASI as a zero-sum sprint, they risk triggering conflict, losing control, and undermining democracy. If they invest in credible governance, verification, and cooperative incentives, they can turn the race into a managed pathway that rewards responsibility.
The question is whether political leaders and corporate executives can act with the foresight to avoid the trap before the logic of competition pulls them deeper in.