Reconciling Enterprise AI Revenue

Editorial cover for the Reconciling Enterprise AI Revenue report: a hand-drawn crimson orthogonal random walk threads down the right side of a white page from an × marker at top to a filled red dot at bottom, mirroring the four-step methodological waterfall from Gartner's $1.478T umbrella down to Menlo's $37B US enterprise GenAI figure

Companion to the full research report, Reconciling Enterprise AI Revenue: A Methodological Crosswalk and Vendor-Level Census, 2025. The PDF carries the 68-vendor primary-source census, the six-tier disclosure framework, the per-step sourced deductions, and the netting of structural double-counts.

Three of the most widely cited enterprise AI revenue figures published in 2025 differ by 40x. A fourth, the vendor run-rate sum, sits between them. None is wrong. The defensible floor that underwrites $690B of hyperscaler capex sits at $63.2-72.5B.

Menlo Ventures put US enterprise generative AI spending at $37B for 2025 in its December 2025 report. The bottom-up sum of disclosed vendor run-rates lands at $100-135B worldwide. IDC sizes the 2025 AI Solutions market at $307B. Gartner forecasts $1.478T for total worldwide AI spending in 2025. The trade press cites all four as “the AI market.” Menlo is not the cautious one and Gartner is not the aggressive one. Each figure is methodologically explicit about what it counts, and each is correct under its own perimeter. They disagree about what enterprise AI revenue is, not about how to count it.

Why does this even matter? Hyperscaler 2026 capex guidance now approaches $690B, financed in part by roughly $1.5T of AI-related debt issuance over the coming cycle per sell-side estimates, with JPMorgan, Apollo, Blackstone, and KKR structuring the paper. Against the $63.2B narrow audit-grade revenue floor, capex coverage is 9.2%. The 1990s telecom buildout peaked at roughly 28% coverage (the inverse of 3.5x capex-to-revenue).

Four numbers

Menlo Ventures: $37B. A buyer-side survey of 495 US enterprise IT decision-makers, fielded November 7-25, 2025 in partnership with an independent research firm, asking respondents to identify the 2025 budget line items they tag as generative AI spend. Three explicit scope restrictions: US only, GenAI only, enterprise only. The narrowest published figure.

Vendor run-rate sum: $100-135B. Built bottom-up from a 68-vendor primary-source census (42 public, 26 private), graded against a six-tier disclosure framework. The lower bound applies maximal resale netting and an ARR-to-recognized-revenue haircut; the upper bound is the net-of-silicon, gross-of-resale presentation; midpoint $123B fully-net. The vendor census makes every dollar reproducible.

IDC: $307B. Worldwide AI Solutions, sized by the IDC Worldwide AI Spending Guide from vendor share data and channel surveys. Hardware, services, and software where IDC tracks them as bundled into enterprise AI solutions. No consumer device hardware. Mixes vendor recognition with buyer spend in a single figure.

Gartner: $1.478T. Worldwide AI spending across five buckets per the September 17, 2025 release: AI-enabled devices ($389B), AI services ($283B), AI-optimized servers ($268B), AI software ($298B), and an “other” category ($242B). Counts the full retail value of any product sold with an AI feature. Gartner’s own analyst John Lovelock conceded the device layer is “largely sold along with the product and not specifically selected” by buyers.

Each figure answers a different question. Menlo captures what enterprise buyers consciously buy. The vendor sum captures what suppliers recognize. IDC bundles in services and integration. Gartner counts every product sold with an AI feature inside it.

The four widely-cited 2025 enterprise AI revenue figures nest as concentric circles: Menlo's $37B (US enterprise GenAI) inside the audit-grade $63.2B floor, inside the vendor run-rate sum of $100-135B, inside IDC's $307B bundled solutions figure, inside Gartner's $1.478T umbrella, a 40x spread driven by definitional perimeter, not measurement error

Waterfall: from $1.478T to $37B

Each step below applies the next forecaster’s published perimeter to the prior figure. Disagree with any step and you can substitute your own assumption and rebuild the bridge.

Step 1: $1.478T to $307B. Strip the umbrella. Subtract Gartner’s $389B of AI-enabled devices, where a $1,200 phone with an AI camera filter contributes $1,200 to Gartner and zero to IDC. Subtract roughly $200B of broad AI services Gartner counts but IDC excludes, $268B of standalone server hardware, and the residual $314B of broad AI software and “other” categories outside IDC’s solutions definition. Net deduction of roughly $1.171T. The biggest single line item (the $389B device layer) is the one Gartner’s own analyst said buyers don’t actively select for.

Step 2: $307B to $123B. Strip the channel and services markup. IDC counts the full transaction price including channel partner margins, integration services, deployment labor, and government-channel premiums. The vendor run-rate sum counts only what disclosing vendors recognize on their own P&L. The $184B gap is the markup layer: channel margins ($40-60B), services-firm consulting not recognized by software vendors ($60-80B), geographic pricing premiums ($20-30B), and AI revenue inside Tier-C public vendors that disclose no dollar attribution ($15-25B).

Step 3: $123B to $63.2B (narrow) or ~$72.5B (broad). Strip below-Tier-B disclosure. The $123B midpoint includes Tier C-F sources: private vendor ARR claims (OpenAI’s $25B per Sacra, Anthropic’s $19B per the CEO’s Morgan Stanley TMT remarks, Cursor’s $2B, DeepSeek’s $1.1B estimate). An audit-grade reader who refuses to count anything below Tier B gets $63.2B narrow. The broad definition, which credits the hardware OEM cohort net of silicon overlap (Arista excluded because its figure is FY26 raised guidance rather than recognized revenue, the same treatment applied to ServiceNow’s $1.5B FY26 ACV target) and CoreWeave’s non-Microsoft revenue, lands near $72.5B. Both numbers are net of the Microsoft-OpenAI $11B resale overlap and the AWS/GCP-Anthropic $7B resale overlap (the latter is this report’s analytical assumption, not a vendor disclosure).

Step 4: $63.2B to $37B. Scope adjustments to Menlo’s frame. Strip non-US hyperscaler AI revenue ($15-20B; Microsoft, AWS, Alibaba, Baidu all have material non-US business at the AI run-rate level). Strip non-GenAI components ($8-12B; the AI SKUs at Palantir, Salesforce, and Workday include substantial predictive and classical ML). Strip the small consumer slice. The cumulative adjustment is roughly $26B and the residual lands near $37B, consistent with Menlo’s published figure. The convergence is partly coincidental; a bottom-up survey will not match a top-down deduction exactly. But the order of magnitude is right and the deductions are transparent.
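
The full bridge can be reproduced in a few lines. All figures are in $B and come from the steps above; the last two deductions are the exact amounts implied by the published endpoints (the text rounds them to $60B and $26B):

```python
# Waterfall from Gartner's $1.478T umbrella down to Menlo's $37B (all $B).
GARTNER = 1478.0

steps = [
    ("devices, broad services, servers, residual software", 1171.0),  # -> IDC $307B
    ("channel, services, geo premiums, Tier-C attribution", 184.0),   # -> midpoint $123B
    ("below-Tier-B disclosure sources", 59.8),                        # -> narrow $63.2B
    ("non-US, non-GenAI, consumer scope adjustments", 26.2),          # -> Menlo $37B
]

level = GARTNER
for label, deduction in steps:
    level -= deduction
    print(f"-{deduction:7.1f}B  ({label})  -> {level:.1f}B")
```

Substituting your own assumption at any step rebuilds the bridge, as the text invites.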

A four-step waterfall walks from Gartner's $1.478T umbrella down to Menlo's $37B US enterprise GenAI estimate: $1.171T removed for device hardware and broad services, $184B removed for channel and services markup, $60B removed for below-Tier-B disclosure sources, and $26B removed for scope adjustments to Menlo's perimeter, leaving the 40x spread as a sequence of sourced deductions

Audit-grade

The $63.2B narrow floor is what remains when you admit only SEC segment-level filings (Tier A) and executive-attributed earnings-call dollar disclosures (Tier B), then net out the structural double-counts in the stack. It is the maximum defensible figure for any question that needs GAAP-traceable revenue. The ~$72.5B broad floor extends the same discipline to the hardware OEM layer and CoreWeave’s external-to-Microsoft revenue, the two additions that survive no-double-count treatment.

The vendor census finds four Tier A disclosures across 68 vendors: NVIDIA’s Data Center segment at $62.3B in Q4 FY26 (~$249B annualized), AMD’s Data Center segment at $5.8B in Q1 2026 (~$23B), Broadcom’s AI semi disclosure at $8.4B in Q1 FY26 (~$34B), and CoreWeave, publicly listed since March 2025, with FY2025 revenue of ~$5.1B and 100% AI infrastructure pure-play disclosure.

Under any no-double-count treatment, Tier A collapses to a narrow band. Not because no Tier A vendors exist, but because every one of them sits structurally upstream of enterprise spend. The three silicon vendors sell into the cloud and OEM layers. CoreWeave sells 67% of its revenue to Microsoft, which resells it as Azure AI Services. Hardware OEMs (Dell at $36B AI annualized, SuperMicro at $33B AI, HPE at $4.4B, Arista at $3.5B) resell silicon to hyperscalers. Net of every reasonable no-double-count rule, Tier A contribution to enterprise-facing AI revenue lands between $2B and $10B.
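
As a sanity check, the Tier A gross figure follows from annualizing the three quarterly silicon disclosures at 4x and adding CoreWeave's full-year revenue (a sketch of the arithmetic, not the census itself):

```python
# Tier A disclosures ($B): quarterly segment revenue annualized at 4x,
# CoreWeave's FY2025 revenue taken as reported.
quarterly = {
    "NVIDIA Data Center (Q4 FY26)": 62.3,
    "AMD Data Center (Q1 2026)": 5.8,
    "Broadcom AI semi (Q1 FY26)": 8.4,
}
annualized = {name: 4 * q for name, q in quarterly.items()}
annualized["CoreWeave (FY2025)"] = 5.1

gross = sum(annualized.values())
print(f"Tier A gross: ${gross:.0f}B")  # ~$311B gross, vs $2-10B net of double-counts
```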

A six-tier disclosure framework grades enterprise AI revenue sources from Tier A SEC segment filings (most audit-grade, $311B gross but only $2-10B net of double-counts) through Tier B executive-attributed earnings-call disclosures (the $63B audit-grade floor), to Tier C qualitative mentions, Tier D third-party ARR claims, Tier E private commentary, and Tier F extrapolated estimates (least audit-grade)

CoreWeave, the most purely AI-disclosed Tier A vendor on Earth, still routes two-thirds of its revenue through a hyperscaler that resells the capacity. The cleaner the disclosure, the more visible the resale problem. The hardware OEM cohort behaves identically: Dell, SuperMicro, HPE, and Arista disclose roughly $77B of AI-attributable revenue annualized, but net of the silicon embedded in their bill of materials (already counted at NVIDIA, AMD, Broadcom), the cohort contributes approximately $10.75B of incremental margin and integration value. That figure sits in the $72.5B broad number and not in the $63.2B narrow figure, because honest analysts disagree whether OEM resale margin belongs in the enterprise-facing perimeter or upstream of it.

The Tier B floor totals approximately $63B after resale netting. Microsoft AI contributes $26B ($37B gross less the $11B Microsoft-OpenAI overlap). AWS AI adds $15B. Palantir AIP, IBM watsonx, Baidu AI Cloud, and Alibaba AI products contribute another $19B between them. The long tail of explicit AI SKU disclosures (Salesforce Agentforce at $800M, Workday AI at $400M, Adobe Firefly at $250M, Zscaler AI at $400M, Box AI at $118M) adds roughly $3B. The hyperscaler-and-defense-software lines at the top matter more than the rest of the long tail combined.
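
The floor arithmetic, with Microsoft entered gross and netted inline (all $B, per the disclosures above):

```python
# Tier B audit-grade floor ($B), after resale netting.
tier_b = {
    "Microsoft AI (gross $37B less $11B OpenAI resale overlap)": 37 - 11,
    "AWS AI": 15,
    "Palantir AIP + IBM watsonx + Baidu AI Cloud + Alibaba AI": 19,
    "long tail of explicit AI SKU disclosures": 3,
}
floor = sum(tier_b.values())
print(f"Tier B floor: ~${floor}B")  # -> ~$63B
```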

The shape is striking. Four firms (Microsoft, AWS, Palantir, and the China hyperscaler cohort led by Alibaba and Baidu) account for more than 80% of the audit-grade total. The “broad enterprise software AI rollout” story implicit in the trade press, in which Salesforce, ServiceNow, Workday, Adobe, Atlassian, Snowflake, and HubSpot all convert customer bases to AI SKUs, has not yet shown up in disclosed revenue at scale. Either it is happening but bundled inside non-AI SKUs (a Tier C disclosure problem), or it is happening more slowly than the narrative suggests; the framework cannot adjudicate which.

A bull objection: ASC 280 (Segment Reporting) does not require AI segmentation. The absence of an “AI segment” at Salesforce or ServiceNow tells you about disclosure obligations, not underlying demand. For TAM and growth questions, the audit-grade floor is the wrong denominator. But for capex coverage, a GAAP cash-flow question, GAAP-traceable revenue is probably the right denominator.

Which number for which question

Capex sustainability and credit underwriting: use $63.2B narrow (or ~$72.5B broad). Lenders financing $690B+ of disclosed hyperscaler capex want repayment in GAAP operating cash flow, and only audit-grade revenue is GAAP-recognizable on a forward basis with reasonable certainty. The principal-versus-agent question under ASC 606, unsettled across the private model-lab cohort, is the live illustration of why a credit memo citing $25B of OpenAI ARR alongside $37B of Microsoft AI is using two different units of measure.

M&A and private equity valuation: use the $100-135B disclosure-grade range. A buyer evaluating an AI-adjacent target cares less about whether revenue is currently audit-grade than what it will be in two years. The Tier D-E private-vendor claims are the leading indicator of recognized revenue. OpenAI’s $25B and Anthropic’s $19B belong here precisely because the dispute over them is about how to count, not whether to count.

TAM and growth analysis: IDC’s $307B is closer to right. It includes the integration and services labor that AI deployment actually requires (the $60-80B of consulting and deployment work that never lands on a software-vendor P&L). A consultancy sizing the AI services opportunity should anchor here.

“Is AI changing the economy?” Gartner’s $1.478T is the right answer. Willingness-to-pay for AI capability includes the smartphone uplift, the AI-PC uplift, and the AI-attributable services premium. Lovelock is right that buyers don’t actively select the device layer, but willingness-to-pay revealed at the cash register is the relevant signal for measuring economic reorganization, regardless of whether buyers identified the AI as the reason.

What this means for the capex thesis

Combined 2026 hyperscaler capex guidance from Microsoft, Alphabet, Amazon, Meta, and Oracle approaches $690B. Coverage on the $63.2B narrow floor is 9.2%. On the ~$72.5B broad floor, 10.5%. On the $123B reconciled midpoint, 17.8%. On the $150B plausibly-estimated ceiling that includes Tier C undisclosed AI revenue at Google Cloud, Meta, Oracle, and the SaaS Tier C cohort, 21.7%.
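
Each coverage figure is the same one-line ratio against the $690B guidance (all $B):

```python
# Capex coverage = AI revenue perimeter / 2026 hyperscaler capex guidance.
CAPEX_2026 = 690.0  # $B, combined MSFT + GOOGL + AMZN + META + ORCL guidance

perimeters = {
    "narrow audit-grade floor": 63.2,
    "broad audit-grade floor": 72.5,
    "reconciled midpoint": 123.0,
    "plausible ceiling (incl. Tier C)": 150.0,
}
for name, revenue in perimeters.items():
    print(f"{name:32s} {revenue / CAPEX_2026:5.1%}")  # 9.2%, 10.5%, 17.8%, 21.7%
```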

The 1990s telecom buildout peaked at roughly 28% capex coverage (3.5x capex-to-revenue), the closest analogue and the one Pozsar-era infrastructure bears cite. The current narrow ratio is roughly one-third of that prior peak. Even at the reconciled midpoint, coverage is worse than telecom 1999.

Side-by-side comparison: the 1990s telecom buildout peaked at 28 percent capex coverage (3.5x capex-to-revenue), while AI 2026 sits at 9.2 percent coverage (10.9x capex-to-revenue) against the audit-grade $63.2B floor, making the current ratio roughly three times worse than the closest prior infrastructure analogue

Two bull-side adjustments are honest. (1) Capex builds capacity that monetizes on an 18-24 month lag, so the fairer comparison runs 2028 revenue against 2026 capex. At current hyperscaler growth (100-170% YoY for Microsoft and AWS, doubling every 9-12 months for model APIs), $63.2B can plausibly reach $250B by 2028 if growth holds. The narrow floor must roughly quadruple in 30 months for standard 5-6 year amortization math to work at hyperscale gross margins.

(2) $690B is not all AI-incremental. Some portion covers baseline cloud refresh, networking, and non-AI workload growth. A reasonable AI-incremental estimate is $400-500B, which lifts narrow coverage from 9.2% to 12.6-15.8% and broad coverage from 10.5% to 14.5-18.1%. Closer to telecom 1999, but still below it. The midpoint analyst case (using $450B AI-incremental capex against $63.2B narrow) is 14.0%.
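
Both adjustments are checkable. Below, the implied annual growth rate for the narrow floor to reach $250B in 30 months, and coverage under the deflated AI-incremental denominator (assumptions as stated above):

```python
# (1) Revenue lag: narrow floor $63.2B must reach ~$250B in 30 months.
required_cagr = (250 / 63.2) ** (12 / 30) - 1
print(f"required annual growth: {required_cagr:.0%}")  # roughly 73% per year

# (2) Denominator deflation: $400-500B of the $690B is AI-incremental.
for ai_capex in (400, 450, 500):
    print(f"narrow coverage at ${ai_capex}B: {63.2 / ai_capex:.1%}")
```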

The finding survives both adjustments. Even after granting the revenue-lag concession and the denominator-deflation point, coverage is worse than the closest prior infrastructure analogue. The capital structure behind the build (JPMorgan, Apollo, Blackstone, KKR financing approximately $1.5T of AI-related debt per sell-side estimates) is taking duration risk on a coverage ratio that cannot be supported without revenue more than quadrupling in 30 months.

Three existing theses on this site mark-to-market differently under the reconciled range.

The SaaSpocalypse Paradox sharpens. Section 5 of the research report places disclosed application-layer AI revenue at $21.2B ($3.7B AI-native private plus $17.5B of AI SKUs at incumbent public software). Against a $400B+ underlying SaaS market, the layer supposedly displacing the pricing structure is between 1% and 5% of it. Meanwhile capex is $690B. Either AI consumes SaaS rapidly (revenue migration accelerates from 5% to 20-30% share inside two to three years) or capex is mis-sized to revenue. Both can’t be cheap.

The AI Capex Arms Race thesis intensifies. The original essay anchored on a $100B floating revenue estimate. The audit-grade build replaces that with the $63.2B narrow / ~$72.5B broad pair plus the $123B midpoint. Coverage on the narrow basis is 9.2%, worse than the implicit 14-15% the prior essay used. The bear case is stronger after the methodology upgrade.

AI Models Are the New Rebar produces a more interesting intermediate finding. The model-API layer at $46.6B gross ($39.6B net of AWS/GCP-Anthropic resale) is roughly 1.9x larger than the application layer at $21.2B. Margin has not migrated upstack on dollar revenue. Pricing commoditized hard from Feb 2023 to Oct 2025 (flagship input fell ~96%, six-month half-life); the dollar layer grew because volume outran price decline. That trend broke in late 2025: April 2026 GPT-5.4 at $2.50/$15 input/output doubled GPT-5’s input price; Claude Haiku 3.5 is ~3x more expensive than Haiku 3. Higher prices help vendor revenue catch up to capex, but only if buyers absorb rather than substituting to open-weight models.

Within the disclosure-grade $100-135B range, OpenAI and Anthropic together represent roughly 87% of private-vendor disclosed AI revenue. The unsettled principal-versus-agent question at the model-lab layer is not at the long tail; it sits at the two firms whose run-rates carry essentially all of the private layer. If recognition methodology is later clarified across the cohort, the disclosure-grade range moves meaningfully without any change in audit-grade Tier B.

The disclosure tier is also a leading indicator of audit-grade revenue, not a lagging one. When Google Cloud moves from “AI is a growth driver” (Tier C) to a dollar figure on an earnings call (Tier B), the underlying economics don’t change. What changes is the floor lenders and equity analysts can underwrite. If Google, Meta, Oracle, SAP, Snowflake, and Atlassian migrate their AI revenue from Tier C to Tier B over the next four quarters, the $63.2B floor moves toward $90-100B with zero change in operating reality.

Spread Index

One ratio summarizes the disclosure gap: audit-grade enterprise AI revenue over Gartner’s umbrella figure. At v1.0 (May 2026), $63.2B / $1.478T = 4.28% (narrow) and $72.5B / $1.478T = 4.90% (broad).
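
A minimal sketch of the index arithmetic (the broad figure computes to 4.905%, which the text rounds down to 4.90%):

```python
# Spread Index: audit-grade enterprise AI revenue / Gartner umbrella ($B).
UMBRELLA = 1478.0

narrow_index = 63.2 / UMBRELLA
broad_index = 72.5 / UMBRELLA
print(f"narrow: {narrow_index:.2%}")  # 4.28%
print(f"broad:  {broad_index:.2%}")
```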

I will try to update this quarterly with each 10-Q cycle. It tracks the gap between what AI vendors recognize under SEC and earnings-call discipline and what the broadest published framing counts as “AI spending.” If audit-grade revenue compounds faster than the umbrella, capex coverage improves on its own. If slower (vendors bundling AI inside non-AI SKUs, or the next wave of disclosure going only to private ARR claims), the gap widens and the underwriting case deteriorates with no change in the headline numbers.

Four nested circles

The finding the discourse keeps approaching sideways: does AI revenue justify the build? Against $690B of 2026 hyperscaler capex, the right denominator is $63.2B narrow or ~$72.5B broad. Gartner’s $1.478T is correct for measuring AI economic activity, wrong for capex coverage. The vendor sum at $100-135B is correct for TAM and M&A, wrong for credit underwriting. The audit-grade floor is the line below which any capex underwriting argument cannot defensibly go.

If the Spread Index stays below 5% through Q4 2026, the underwriting case for the current build cannot improve from disclosure alone; revenue itself must compound.

№ 080 · 13 min · AI, Investing