On December 11, Jimmy Carr sat on the TRIGGERnometry podcast and delivered a riff that sounded like Peter Thiel’s stagnation thesis filtered through a comedian’s timing:
Minus the screens from any room, we’re living in the 1970s. Nothing’s happened in physics since ‘72. String theory has not got us anywhere. But if you take the compute power of AI and point it at physics, what happens? We could have a world of plenty. I hope that’s the world we live in. But it could go another way.
Two months later, on February 13, GPT-5.2 derived and formally proved a new result in theoretical physics: single-minus gluon scattering amplitudes, long assumed to vanish, are nonzero in the half-collinear regime. Nima Arkani-Hamed at the Institute for Advanced Study called the formulas “strikingly simple” after fifteen years of personal curiosity about the problem. Nathaniel Craig at UC Santa Barbara called it “journal-level research advancing the frontiers of theoretical physics.”
Thiel’s stagnation case
Carr was paraphrasing Thiel, who has been making this argument for fifteen years. The Founders Fund manifesto (2011) put it bluntly: “We wanted flying cars, instead we got 140 characters.” Thiel’s framework distinguishes progress in bits from progress in atoms: spectacular digital gains since 1970, physical-world stagnation. Tyler Cowen named the broader phenomenon the Great Stagnation. On the Douthat podcast Thiel was more measured: “The claim was that the velocity had slowed, it wasn’t zero.”
The data supports the velocity claim. Total factor productivity growth, the metric that captures genuine scientific progress and technological improvement, ran at roughly 1.7% annually from 1947 to 1973. Since 2004, it has averaged 0.4%. Robert Gordon’s The Rise and Fall of American Growth argues the “special century” of 1870 to 1970 was a one-time event. Bloom, Jones, Van Reenen, and Webb showed in the American Economic Review that maintaining Moore’s Law required 18x more researchers in 2014 versus 1971.

The Standard Model of particle physics was essentially complete by the early 1970s. Since then, we have confirmed things we already predicted: the Higgs boson (2012, 48 years after prediction), gravitational waves (2015, 99 years after Einstein), the accelerating expansion of the universe (1998). Important experimental work. But confirmations, not revolutions. No supersymmetric particles. No extra dimensions. No new fundamental energy sources. No unified field theory. String theory, the leading candidate for physics beyond the Standard Model, has produced zero experimentally confirmed predictions in 55 years and admits roughly 10^500 possible solutions, which is another way of saying it predicts everything and therefore nothing. Sabine Hossenfelder captured the frustration:
Theoretical physicists used to explain what was observed. Now they try to explain why they can’t explain what was not observed.
What AI has already done for science
AlphaFold predicted the three-dimensional structures of 214 million proteins, solving the protein folding problem for structural biology. It won the 2024 Nobel Prize in Chemistry for Demis Hassabis and John Jumper, and has been used by over 2 million researchers in 190 countries. DeepMind’s GNoME identified 2.2 million new crystal structures and 381,000 predicted-stable materials, equivalent to roughly 800 years of prior human discovery in materials science. Lawrence Berkeley Lab’s A-Lab robotically synthesized 41 of these in 17 days.
In fusion, DeepMind trained a reinforcement learning system to autonomously control plasma in a real tokamak at EPFL, sculpting it into configurations no human operator had achieved. Princeton researchers predicted tearing instabilities 300 milliseconds in advance and adjusted reactor parameters in real time: the first demonstration of preventing, not just suppressing, the instabilities that have plagued fusion for decades. TAE Technologies used AI-optimized beam injection to sustain plasma above 70 million degrees C. At Lawrence Livermore, the CogSim AI framework predicted a 74% probability of ignition days before the December 2022 shot that achieved it.
Microsoft and Pacific Northwest National Lab screened 32.6 million inorganic materials in roughly 80 hours, identified 18 finalists, and produced a working battery prototype using 70% less lithium within nine months. In drug discovery, at least 75 AI-discovered drugs have entered clinical trials, up from 3 in 2016, with Phase I success rates of 80 to 90% compared to the traditional 40%.
And then, GPT-5.2 produced a new result in theoretical physics. A proof that human physicists had not found. The mathematical reasoning timeline tells the story. AlphaGeometry solved 25 of 30 Olympiad geometry problems in January 2024. By July 2024, AlphaProof earned a silver medal at the International Mathematical Olympiad. By 2025, Gemini Deep Think scored gold: 5 of 6 problems, 35 points, end-to-end in natural language. Terence Tao revised his prediction for superhuman AI mathematics from 2029 to 2026.
The 75:1 compute gap
Here is the number that matters. Big Tech spent over $250 billion on AI infrastructure across 2024 and 2025. Total US federal AI R&D spending: $3.3 billion per year. That is a compute divide of roughly 75:1 between commercial and scientific AI investment. The NAIRR pilot allocated about 3.2 yottaFLOP of compute (3.2 × 10^24 floating-point operations) to academic researchers: enough to train GPT-3.5 once, but not enough for a single GPT-4-class run.
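The 75:1 figure is simple division of the two headline numbers above, taken as quoted:

```python
# Ratio of headline commercial AI spend to annual federal AI R&D,
# using the figures quoted in the text.
commercial_ai_spend = 250e9  # Big Tech AI infrastructure, 2024-2025
federal_ai_rnd = 3.3e9       # US federal AI R&D, per year
ratio = commercial_ai_spend / federal_ai_rnd
print(f"compute divide: {ratio:.0f}:1")  # ~76:1, rounded to 75:1 in the text
```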

The DOE’s Genesis Mission announced $320 million in December 2025. That is less than what Meta spends on AI infrastructure in a week. The FASST initiative authorized $2.4 billion per year for five years, $12 billion total, but congressional appropriations are still pending. The US has three exascale supercomputers at national labs. These serve all of science, not just AI.
If AI has already produced results in theoretical physics, materials science, fusion energy, and drug discovery with what amounts to scraps from the commercial table, what happens when someone makes a serious allocation? Hassabis told Fortune in February 2026 that in 10 to 15 years “we’ll be in a kind of new golden era of discovery, a kind of new renaissance.” He described a vision of “radical abundance” where AI has “successfully bottled the scientific method.”
Goldman Sachs estimates generative AI could raise global GDP by 7%, roughly $7 trillion. McKinsey pegs R&D-specific value at $360 billion to $560 billion annually, but explicitly notes that it did not attempt to estimate
the value of truly breakthrough innovations that transform markets (if, for example, nuclear fusion was to enable limitless, clean electricity production).
The bear case: pattern matching is not physics
The bear case is simple and serious. AI is the best pattern-matching system ever built. Physics does not advance by pattern matching. It advances by conceptual revolution: Riemannian geometry for general relativity, an entirely new mathematical framework for quantum mechanics, gauge theory for the Standard Model. None of these were discoverable in existing data.
Noam Chomsky argued in the New York Times that AI’s deepest flaw “is the absence of the most critical capacity of any intelligence: to say not only what is the case … but also what is not the case and what could and could not be the case.” A commenter on Peter Woit’s blog at Columbia spent “over 100 hours probing these models” on open problems and found they “basically never try to come up with something new” when the answer is not already in the training data.
Dario Amodei was notably careful in “Machines of Loving Grace.” He predicted AI could compress 50 to 100 years of biological progress into 5 to 10 years, but on physics he hedged: particle physicists are “limited by data from particle accelerators” and “it’s not clear that they would do drastically better if they were superintelligent.” Some problems are not compute-limited. They are experiment-limited, or concept-limited, or both.
Stephen Wolfram’s principle of computational irreducibility poses the hardest theoretical limit: some systems cannot be predicted by any shortcut. The only way to know what they do is to run them. If fundamental physics contains computationally irreducible problems, no amount of AI compute will crack them.
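Wolfram's canonical example is the Rule 30 cellular automaton, sketched here as an illustration (the code is mine, not from the text): a trivial local update rule whose long-run behavior has, as far as anyone knows, no predictive shortcut. The only way to learn row N is to compute every row before it.

```python
# Rule 30 elementary cellular automaton: each cell's next state depends
# only on its left neighbor, itself, and its right neighbor, via
# new = left XOR (center OR right). Wolfram's point: despite this
# trivial rule, no known formula predicts the pattern faster than
# simulating it step by step.
def rule30_step(cells):
    n = len(cells)
    return [
        cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
        for i in range(n)
    ]

# Start from a single live cell and run the system forward.
row = [0] * 31
row[15] = 1
for _ in range(10):
    row = rule30_step(row)
```

The rule fits in one line; the resulting pattern is complex enough that the randomness of its center column remains an open problem. That gap between the simplicity of the rule and the opacity of its behavior is what irreducibility means in practice.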
But Mario Krenn at Max Planck offers a counterpoint from the lab bench. His team published in Physical Review X on AI-discovered gravitational wave detector designs that outperform human designs, and in Science Advances on an AI-discovered violation of Bell inequality with unentangled photons. He does not claim AI understands physics. He claims it finds things physicists miss: “I let the algorithm run, and within a few hours it found exactly the solution that we as human scientists couldn’t find for many weeks.”

Two roads
The nuclear parallel is the one that matters. Fission was discovered in Berlin in December 1938. Hiroshima was August 1945. Less than seven years from pure physics to weapon. The first nuclear power plant came nine years later. Oppenheimer captured the dynamic: “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.”
Every AI-accelerated physics breakthrough is inherently dual-use technology. The IAEA reports 35 of 45 private fusion companies expect commercial pilot plants between 2030 and 2035. Commonwealth Fusion Systems has raised roughly $3 billion. China established a state-owned fusion company in July 2025. The fusion market is projected at $430 billion by 2030. The same plasma control AI that keeps a tokamak stable could, in principle, optimize weapons physics.
I don’t know which road we’re on. I’m not sure anyone does. But the velocity of AI scientific discovery, from Olympiad geometry problems to a gold medal at the International Mathematical Olympiad to a result in theoretical physics, all within 25 months, suggests the question will be answered empirically rather than philosophically. And probably sooner than the physicists expect.
The cost of intelligence has fallen roughly 150x in two years. The cost of pointing it at physics is a policy choice, not a technical constraint. The 75:1 compute gap between commercial and scientific AI spending is the number that determines how fast this goes. Whether it should go fast is a different question entirely.