Two Anthropics


Anthropic was founded to be the safety lab that would pull rivals upward. Five years later it is the most aggressive frontier scaler at $380 billion, the company most likely to build the dangerous thing it warns about.

A personal note first. This post is an outtake from a 14,000-word profile of Dario Amodei I just published. I don't like Anthropic noticeably more than the other hyperscalers and AI model providers. But Amodei is probably the AI CEO whose language and thinking land closest to mine, so over the last few weeks I worked through a dozen of his interviews, a stack of his essays, and a lot of hours of him on YouTube. The longform is the portrait. This post is the structural argument that fell out of it. If you want the character work, the family backstory, and the scenes a paradox piece can't carry, read the full thing.

Dario Amodei founded Anthropic in 2021 with six other ex-OpenAI researchers and a thesis with two halves. The first: powerful AI is coming whether or not safety-aligned labs are at the frontier, so safety-aligned labs have to be at the frontier. The second, the load-bearing claim: competing on safety would pull rivals upward, a "race to the top." Both halves are still in the company's public materials. The first half is doing fine. The second is colliding with what the company has actually become. The Series A raised $124 million in May 2021. The Lightspeed round added $3.5 billion in early 2025. By September 2025 the valuation was $183 billion; by February 2026 it had reached $380 billion. Revenue went from zero to roughly $10 billion annualized in three years, growing about 10x year over year.

Race to the top, on paper

Anthropic is registered as a Public Benefit Corporation, and its self-description is "an AI safety lab that is also an AI lab." The argument goes: a lab that genuinely cares about safety has to be commercially competitive at the frontier, because otherwise the frontier is set by labs that care less. Being at the frontier lets you publish safety practices, hire the best alignment researchers, and shape policy with credibility. Rivals see your practices working and copy them. The whole industry shifts. Race to the top.

In January 2026, Amodei published The Adolescence of Technology, a roughly 22,000-word essay laying out a five-category risk taxonomy: misalignment, individual misuse, state misuse, economic disruption, and indirect or unknown effects. The essay’s anchor sentence on the geopolitical category reads:

autocracy is simply not a form of government that people can accept in the post-powerful AI age.

This is the language of someone who believes the company’s mission is civilizational, and who is willing to say so under his own name. It is also the language of a company that has positioned itself as the democratic-world frontier lab, which has consequences for who its customers and adversaries become.

The strongest external signal that Anthropic's safety thesis is not pure positioning came in November 2023. After Sam Altman's brief firing and reinstatement at OpenAI, the OpenAI board approached Amodei with two offers: take the CEO job, or merge Anthropic into OpenAI. He declined both. Walking away from the CEO chair at the most valuable AI company in the world, less than three years after leaving it, was the most expensive credibility signal Amodei could send that the safety thesis was the actual thesis and not a brand exercise. Roughly fourteen OpenAI researchers had followed him out two years earlier.

There is a softer counterweight worth keeping in mind. Asked by Nicolai Tangen in 2024 about scaling timelines, Amodei said:

frankly, although I invented AI scaling, I don’t know that much about that either. I can’t predict it.

What scaling produced

Five years after that $124 million Series A in May 2021, Anthropic is valued at $380 billion and runs at roughly $10 billion in annualized revenue. Revenue went from zero to $100 million in 2023, $100 million to $1 billion in 2024, then to a $10 billion run-rate by the end of 2025, three consecutive years of 10x growth. Investor decks show projections of $26 billion for 2026 and $70 billion for 2028. The forward trajectory is the steepest in the history of enterprise software.
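To make the forward numbers concrete, here is a minimal sanity check, assuming the deck's $26 billion (2026) and $70 billion (2028) figures are full-year revenue rather than run-rates; the implied compound growth rate is roughly consistent with the 60% forward-growth trajectory cited later in this piece.

```python
# Implied compound annual growth rate (CAGR) between the two deck projections.
# Assumption: both figures are full calendar-year revenue, not run-rates.
rev_2026 = 26e9   # projected 2026 revenue, USD
rev_2028 = 70e9   # projected 2028 revenue, USD
years = 2028 - 2026

cagr = (rev_2028 / rev_2026) ** (1 / years) - 1
print(f"Implied 2026-2028 CAGR: {cagr:.0%}")  # ~64%
```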

The capital base behind that revenue is more telling than the revenue itself. Amazon has committed roughly $8 billion cumulatively. Lightspeed wired the first $1 billion of the $3.5 billion early-2025 round on the same Monday Nvidia dropped 17% on the DeepSeek shock, a piece of timing that, deliberately or not, communicated conviction at the moment everyone else flinched. A year later, the $30 billion Series G in February 2026, led by GIC and Coatue, brought the post-money valuation to $380 billion in the second-largest tech funding round on record. The Fluidstack data-center deal was $50 billion. Project Rainier, the Amazon-anchored compute build, was $11 billion. Compute commitments through 2028 total roughly $78 billion. The headcount is ~2,500 employees as of late 2025, up from a few hundred two years earlier.

The customer list reads like a sales-led enterprise software company, because that is what it now is. Pfizer. United Airlines. Novo Nordisk. AIG, which reports an 8x to 10x speed-up on insurance underwriting workflows over an 18-month pilot. These are not safety partnerships. These are revenue contracts in regulated industries that came down to a procurement bake-off. Claude Code, the developer-tier productization that shipped in February 2025, gave Anthropic a per-seat developer footprint inside enterprises that previously bought through the API alone. Anthropic claims internally that it generates 2.1x more revenue per dollar of compute than its largest rival, a figure that should be treated as a claim rather than a verifiable number, but which is consistent with a company optimizing seriously for unit economics.

Anthropic today is a frontier lab competing for every contract that crosses procurement. The structural pivot is that this is no longer a research lab with a corporate appendage. It is a frontier company with a research function, and the research function inherits the constraints of the company, not the other way around. None of this is bad on its own. The question is whether the founding thesis still describes what the organism does.

Why both cannot fully be true

To pull rivals upward you have to stay competitive at the frontier of the thing you call dangerous. The faster you scale, the more your safety claim depends on rivals copying your practices rather than on you slowing down. But rivals’ incentive to copy weakens precisely as you become a credible competitor on revenue and contracts. One empirical hedge worth flagging: OpenAI’s Preparedness Framework and DeepMind’s Frontier Safety Framework share structural features with Anthropic’s Responsible Scaling Policy, and some of the convergence may be parallel development inside large labs facing similar regulatory pressures, not Anthropic-pulled diffusion. The race-to-the-top claim is harder to falsify than it looks.

Amodei himself has publicly acknowledged the tension, telling Fortune that Anthropic struggles to balance its safety mission with commercial pressure. Three forcing functions are the operational shape of that struggle.

(1) the Department of Defense. On March 26, 2026, a federal judge issued a temporary injunction against the DoD in a dispute that started when Pete Hegseth's department asked Anthropic to drop the contractual ban on Claude being used for mass domestic surveillance or fully autonomous weapons in democratic countries. Anthropic refused. The DoD then labeled the company a "supply-chain risk." The judge's written opinion described the DoD's actions as "classic First Amendment retaliation." The point is not that Anthropic was wrong to refuse (it almost certainly was right to refuse) but that at frontier scale a safety constraint becomes a federal court fight, not a research-policy choice.

(2) the Pottinger op-ed. In January 2025, Amodei co-authored a Wall Street Journal opinion piece with Matt Pottinger, the former deputy national security advisor, titled Trump Can Keep America's AI Advantage. The piece argued for tighter chip-export controls against China. This is policy advocacy from the perspective of a national-security actor, not a research lab. There is a coherent through-line from the safety thesis to chip controls (the argument is that frontier capability in the wrong hands is a category-five risk), but the public posture is different from "we publish safety research and hope rivals copy us." Anthropic is now a constituency in great-power competition.

(3) the Huang feud. In August 2025, Amodei and Jensen Huang traded public criticisms over export controls. Amodei accused Huang of an "outrageous lie." He grew visibly emotional discussing his father's preventable death, a thread that surfaces in nearly every longform interview he gives and is the emotional spine of the longform profile. The Nvidia CEO is one of two or three people who can directly affect Anthropic's compute supply. Picking that fight in public is a choice you make as a political actor, not as a research lab. It is also a choice the founding thesis would not have predicted in 2021.

The point is not that any of these are bad. Each is defensible on the merits. The point is that “race to the top” was a 2021 framing for a 2021 company. The 2026 company is a different organism. It has federal-court fights, named geopolitical adversaries, a Senate that voted 99-1 against the kind of ten-year state-AI moratorium Amodei opposed in his June 2025 New York Times op-ed, and a 60% revenue-growth trajectory on a forward base measured in tens of billions. None of those is what a safety lab looks like. All of them are what a frontier company that takes safety seriously looks like.

Three scenarios

(A) The thesis holds. Frontier labs converge on Anthropic-style safety practices, race-to-the-top works as advertised, and Anthropic earns a durable safety-narrative premium. The conditions for this case are an EU AI Act enforced with teeth and a US transparency framework that gives federal cover to the practices Anthropic already publishes. There is some support: the Senate's 99-1 vote against the proposed ten-year state-AI moratorium suggests the political ground is not hostile to oversight. The reasons to discount it: the Senate vote was a defensive outcome, not a positive endorsement of federal standards, and a US transparency framework is still nowhere in sight despite Amodei's June 2025 NYT op-ed (Don't Let A.I. Companies off the Hook). The regulatory tailwind that scenario (A) needs has not materialized at the speed the thesis requires.

(B) The thesis becomes a constraint, not a moat. Most likely. Anthropic loses ground on raw frontier capability to less-constrained competitors (xAI, a more permissive next-generation OpenAI, leading Chinese labs), and the safety stance becomes a self-imposed handicap rather than a market-shaping lever. The 80% wealth pledge from Anthropic's seven cofounders, disclosed in the January 2026 essay, is a real governance constraint, not a PR move, but it works as a partial offset to wealth-concentration concerns rather than a reversal of the competitive dynamic. The DoD fight, the Pottinger op-ed, and the Huang feud are early evidence that at frontier scale, your safety stance creates adversaries you cannot route around. The pattern accelerated in early 2026, when Time reported that Anthropic dropped its flagship safety pledge in favor of a non-binding framework the company itself said "can and will change," the kind of operational adjustment scenario (B) predicts. The implication is that the safety premium investors paid in 2021-2023 should compress, because the thing the premium was paying for, a safety lab that pulled rivals upward, is becoming a frontier company that constrains itself relative to less-constrained rivals.

(C) The paradox dissolves because the scale itself ends. AI capex hits a Jevons-paradox-for-labor wall, model commoditization compresses margins, frontier scale becomes uneconomic, and Anthropic returns to looking like a research lab again because every lab does. Implication: the paradox was about a phase, not a company. The reasons to discount this case are that Anthropic's $26 billion and $70 billion forward revenue projections make a structural retreat harder for it than for capex-heavier hyperscalers, and that a commoditization scenario hits Anthropic's gross-margin profile later than it hits labs running on rented compute.

What this means for pricing AI labs

Three observations for an allocator.

(1) the safety narrative is not a moat at this scale; it is a constraint. Price it accordingly. The premium investors paid in 2021-2023 was for Anthropic against a counterfactual where there was no safety-aligned frontier lab. That counterfactual no longer exists. Anthropic is the frontier lab. The safety stance now binds the company in ways that show up as legal exposure, slower product cadence in some categories, and a narrower set of customers it can serve. None of these alone breaks the company. In aggregate, they change what an investor is paying for.

(2) the signal to watch is whether the rate of frontier-capability spread is faster than the rate of safety-practice diffusion. That ratio decides whether race-to-the-top is happening at all. If safety practices spread faster than capability, the thesis works. If capability spreads faster than safety practices, the thesis is a story the company tells about itself. The current evidence leans toward the latter, but it is the kind of variable that can move with one EU enforcement action or one US framework.

(3) Anthropic-the-company and Anthropic-the-thesis are now two different things. An investor can be long the company and short the thesis. The company has revenue, named customers, governance constraints that work as a partial offset to wealth-concentration risk, and a forward trajectory that is hard to bet against. The thesis is a 2021 framing under increasing structural strain. The longform makes the case that Dario sees this; here we make the case that the market should price it.

Based on twenty-five hours of Dario Amodei's on-record interviews and six longform essays. The full reported profile, Inside the Mind of Dario Amodei, runs 14,000 words at the author's newsletter.
