Spooky and powerful, AGI (Artificial General Intelligence), the planet’s newest technology, promises divine benefits as well as diabolical applications that could trigger our destruction. This chilling warning comes not from the usual cabal of Luddites, Cassandras and doomsayers, but from Silicon Valley’s most influential tech titans: former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks. Their strategy for a safe world: “Any state’s aggressive bid for unilateral AI dominance is met with preventive sabotage by rivals.”
In their paper Superintelligence Strategy, published on March 5, the tech pioneers urge the US not to develop AI systems with “superhuman” intelligence, popularly called AGI. AGI matches, and may even surpass, human cognitive capabilities, approaching the “point of singularity”: the moment artificial intelligence irreversibly overtakes human intelligence. The co-authors warn that military or rogue use of AGI could unleash “catastrophe”. Quoting American physicist and strategist Herman Kahn, they say superintelligence strategy requires “thinking about the unthinkable”.
The trio urges the US not to pursue a Manhattan Project-style mandate to develop AGI. The 1940s’ Manhattan Project was a government-backed, top-secret, well-funded mission to develop nuclear bombs. The then chief censor, Byron Price, called the Manhattan Project “the best-kept secret of the war”. Disclosing project secrets was punishable with 10-year imprisonment and a $10,000 fine (equal to about $180,000 now).
The Manhattan Project was purportedly an assertion, not an execution, of American power: a deterrent, a means to end World War II.
But the bombs were dropped on Hiroshima and Nagasaki; the war ended, and the Japanese endured a catastrophe. The father of the bomb, physicist Robert Oppenheimer, famously quoted the Bhagavad Gita to describe the explosion: “Now I am become Death, the destroyer of worlds.”
The co-authors warn that an aggressive race by the US to exclusively control superintelligent AI systems could trigger fierce retaliation from China, destabilising not just international relations but the world. Today’s security scenario is different: where the US once faced a declining Soviet Union, it now confronts an ascendant China. “A Manhattan Project for AGI assumes that rivals will agree to an enduring imbalance or omnicide [worldwide destruction], rather than move to prevent it,” caution Schmidt, Wang and Hendrycks.
Is their warning too late? Last year, a US congressional commission proposed a “Manhattan Project-style” programme on superintelligence. Donald Trump announced a $500 billion investment in AI infrastructure, called the “Stargate Project”. He reversed the Joe Biden administration’s AI regulations. US Energy Secretary Chris Wright said the US is at “the start of a new Manhattan Project” on AI.
Comparing AI systems to nuclear weapons may sound far-fetched, but the Pentagon already regards AI as a top military advantage, one that speeds up its “kill chain”. The superintelligence strategy hinges on strict protocols against AGI proliferation to rogue actors, on boosting economies and militaries using AI, and on shifting the US focus from “winning the superintelligence race to deterring rivals from creating AGI”. Echoing the nuclear doctrine of MAD (Mutually Assured Destruction), the co-authors introduce their own concept: Mutual Assured AI Malfunction (MAIM).
Rather than waiting for adversaries to weaponise AGI, MAIM prescribes that governments proactively disable threatening foreign AI projects, through aggressive cyberattacks or by cutting off access to open-source AI models.
The co-authors believe it is wiser to take a defensive approach. But definitions of “threatening” differ between the attacker and the attacked. The military warrior spirit prefers offence to defence. Have gun, will kill. Have bomb, will detonate. Lethal weapons sneak out of hidden bunkers and clandestine laboratories. AGI, the “catalyst of ruin”, is also the catalyst of power. Experts predict AGI could arrive ahead of schedule, as early as 2026.
Pratap is an author and journalist.