Investing FAQ

20 most recent of 90 questions from 17 posts about investing

Frequently asked questions about market analysis, valuation frameworks, portfolio allocation, and investment strategy

How fast are AI inference costs declining?

At a median rate of 50x per year for equivalent performance, according to Epoch AI. GPT-4-level performance on PhD-level science questions cost $30 per million input tokens in early 2023 and costs under $0.10 today via open-source alternatives, roughly a 300-fold reduction in three years.
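The 300-fold figure and its implied annual rate can be sanity-checked directly. The dollar figures below come from the answer above; the annualized rate is derived here for illustration, not quoted from Epoch AI.

```python
# Cost-decline arithmetic for the GPT-4-level example cited above.
early_2023_cost = 30.00  # $ per million input tokens, early 2023
today_cost = 0.10        # $ per million tokens via open-source alternatives
years = 3

fold_reduction = early_2023_cost / today_cost  # 300-fold
annual_rate = fold_reduction ** (1 / years)    # ~6.7x per year for this one data point

print(f"{fold_reduction:.0f}-fold reduction, ~{annual_rate:.1f}x cheaper per year")
```

Note that a single benchmark pair implies a slower per-year rate than Epoch AI's cross-task median of 50x; the two figures measure different things.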

Read full answer in: AI Models Are the New Rebar

Are open-source AI models as good as proprietary ones?

Nearly. The Stanford HAI 2025 AI Index found the gap shrank from 8 percent to 1.7 percent in a single year. Qwen 3.5-35B matches Claude Sonnet 4.5 on select benchmarks at roughly 3 percent of the cost, and GLM-5 achieves the highest Chatbot Arena Elo of any open-source model.

Read full answer in: AI Models Are the New Rebar

Is OpenAI's $840 billion valuation justified?

At 42x trailing revenue, the valuation requires revenue growth to $200-280 billion by 2030 while expanding margins. But adjusted gross margins fell from 40 to 33 percent in 2025 as inference costs quadrupled, and the company lost $13.5 billion in the first half of 2025 alone.

Read full answer in: AI Models Are the New Rebar

Who benefits from AI model commoditization?

Infrastructure providers like Nvidia and cloud platforms collect rent regardless of which model runs. Application-layer companies embedding AI into domain-specific workflows with proprietary data also benefit. Platforms with massive distribution like Meta and Google deliberately accelerate commoditization to prevent anyone from owning the model layer.

Read full answer in: AI Models Are the New Rebar

What are the switching costs for AI models?

Near zero. The OpenAI API format is the de facto standard supported by virtually every provider. LiteLLM, an open-source gateway with 37,000 GitHub stars, provides a unified interface to over 100 providers through a single configuration change. OpenRouter offers managed access to more than 400 models. The only meaningful lock-in is custom fine-tuned models, which affect a small fraction of deployments.
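The near-zero switching cost follows from the request shape itself: because providers accept the OpenAI chat-completions format, moving between them is typically just a base-URL and model-name change. A minimal sketch, using only the standard library; the OpenRouter model name is illustrative, and real calls would also need an API key header.

```python
import json

def chat_request(base_url: str, model: str, prompt: str) -> dict:
    """Build an OpenAI-format chat request; the payload is identical across providers."""
    return {
        "url": f"{base_url}/chat/completions",
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# Same code path, different provider: only the URL and model string change.
openai_req = chat_request("https://api.openai.com/v1", "gpt-4o", "hello")
router_req = chat_request("https://openrouter.ai/api/v1", "qwen/qwen-2.5-72b-instruct", "hello")
```

Gateways like LiteLLM generalize exactly this pattern, routing the same payload to over 100 providers behind one interface.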

Read full answer in: AI Models Are the New Rebar

Can OpenAI and Anthropic become profitable?

Both face significant challenges. OpenAI lost $13.5 billion in the first half of 2025, with compute and talent consuming 75 percent of revenue and Microsoft taking another 20 percent through 2032. Anthropic, at a $380 billion valuation on $14 billion in run-rate revenue, projects positive cash flow around 2027-2028. Both are betting they can simultaneously grow revenue and expand margins in a market where open-source alternatives offer comparable performance at 3-15 percent of the cost.

Read full answer in: AI Models Are the New Rebar

How much is Big Tech spending on AI in 2026?

The Big 4 (Amazon, Alphabet, Meta, and Microsoft) are collectively guiding to $610–665 billion in 2026 capital expenditure, up from approximately $384 billion in 2025. Including Oracle, the figure reaches $660–690 billion. Goldman Sachs projects cumulative 2025–2027 spending at $1.15 trillion, more than double the $477 billion spent over the prior three years combined.

Read full answer in: AI Capex Arms Race: Who Blinks First?

What is happening to Big Tech free cash flow?

It is compressing sharply. Alphabet's free cash flow held at $73 billion in 2025 despite capex nearly doubling, because operating cash flow grew 31.5%. But with 2026 capex guided at $175–185 billion, Pivotal Research projects FCF falling approximately 90% to $8.2 billion. Amazon's FCF is already at $11.2 billion TTM. BofA credit strategists found AI capex will consume 94% of operating cash flow minus dividends and share repurchases for the Big 4 in 2025–2026.
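The compression follows from the basic identity FCF = operating cash flow − capex. A sketch using the Alphabet figures above; the implied operating cash flow is a back-of-envelope derivation from the guided capex midpoint and Pivotal's FCF projection, not a reported number.

```python
def free_cash_flow(operating_cf: float, capex: float) -> float:
    """Free cash flow approximated as operating cash flow minus capex ($B)."""
    return operating_cf - capex

# Midpoint of Alphabet's 2026 capex guidance ($175-185B):
capex_mid = (175 + 185) / 2            # $180B

# Operating cash flow implied by Pivotal's ~$8.2B FCF projection (assumption):
implied_ocf = 8.2 + capex_mid          # ~$188B, from FCF = OCF - capex
```

The same identity explains why 2025 FCF held up: operating cash flow grew fast enough (31.5%) to offset capex nearly doubling.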

Read full answer in: AI Capex Arms Race: Who Blinks First?

What is the AI capex to revenue ratio?

Rough estimates place direct AI revenue at $40–60 billion in 2025 against AI-specific capex of $290–330 billion (roughly 75% of total capex per CreditSights), yielding a coverage ratio of approximately 0.12–0.20x. Sequoia's David Cahn calculated that the AI ecosystem needs to generate $600 billion in annual revenue to justify current infrastructure spending. By 2026, with perhaps $80–120 billion in AI revenue against $450 billion in AI capex, the ratio may reach 0.18–0.27x, still far below 1x.
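The coverage ratio above is a straightforward division; the ranges below reproduce the arithmetic using the revenue and capex bounds cited in the answer.

```python
def coverage_ratio(ai_revenue: float, ai_capex: float) -> float:
    """Direct AI revenue divided by AI-specific capex, both in $ billions."""
    return ai_revenue / ai_capex

# 2025: $40-60B revenue against $290-330B AI-specific capex
low_2025 = coverage_ratio(40, 330)     # ~0.12x (pessimistic end)
high_2025 = coverage_ratio(60, 290)    # ~0.21x (optimistic end)

# 2026: perhaps $80-120B revenue against ~$450B AI capex
low_2026 = coverage_ratio(80, 450)     # ~0.18x
high_2026 = coverage_ratio(120, 450)   # ~0.27x
```

Even the optimistic 2026 case remains roughly a quarter of the 1x level at which revenue would cover the year's investment.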

Read full answer in: AI Capex Arms Race: Who Blinks First?

What did Dario Amodei say about AI spending risk?

On the Dwarkesh Podcast in February 2026, Anthropic CEO Dario Amodei said: 'If my revenue is not $1 trillion, if it's even $800 billion, there's no force on Earth, there's no hedge on Earth that could stop me from going bankrupt if I buy that much compute.' He warned that being off by a single year on growth forecasts could be fatal: 'What if the country of geniuses comes, but it comes in mid-2028 instead of mid-2027? You go bankrupt.'

Read full answer in: AI Capex Arms Race: Who Blinks First?

How fast are AI inference costs falling?

Epoch AI measured inference cost declines at a median 50x per year, accelerating to approximately 200x per year after January 2024. GPT-3-era processing cost around $20 per million tokens at launch in 2020; by early 2026, models of comparable capability cost roughly $0.07, a roughly 280-fold decline over five years. DeepSeek's R1 model priced API access at roughly $0.65 per million tokens, approximately 95% cheaper than OpenAI's o1 at launch.

Read full answer in: AI Capex Arms Race: Who Blinks First?

How does AI infrastructure spending compare to the telecom bubble?

The scale is comparable: the telecom buildout invested over $500 billion (in 2000 dollars), financed mostly with debt, and by 2001 only 5% of installed fiber-optic capacity was in use. Key differences favor today's hyperscalers: they generate massive internal operating cash flows, whereas telecom builders were heavily debt-financed from the start. Key risks remain: AI hardware becomes obsolete far faster than fiber, inference cost deflation creates stranded asset risk, and the shift toward debt financing (J.P. Morgan projects $300 billion in investment-grade bonds for AI data centers in 2026 alone) is introducing telecom-era fragility.

Read full answer in: AI Capex Arms Race: Who Blinks First?

Is AI infrastructure spending a bubble?

The parallels to the 1990s telecom bubble are real: AI infrastructure spending as a percentage of GDP already exceeds the dot-com era buildout, J.P. Morgan projects $300 billion in investment-grade bonds for AI data centers in 2026, and the revenue-to-capex coverage ratio sits at roughly 0.15-0.25x. Key differences: hyperscalers fund much of it from operating cash flow rather than pure debt, and GPU utilization rates are currently high. But inference cost deflation of 50-200x per year creates stranded asset risk that fiber never faced, and BofA found AI capex will consume 94% of operating cash flow minus dividends and buybacks for the Big 4 in 2025-2026.

Read full answer in: AI Capex Arms Race: Who Blinks First?

What is the GPU depreciation risk for hyperscalers?

Nvidia now releases new GPU architectures annually, and Jensen Huang said of H100s after Blackwell launched: 'You couldn't give Hoppers away.' Michael Burry estimates hyperscalers will understate depreciation by $176 billion between 2026 and 2028 by using five-to-six-year useful lives for hardware that may be economically obsolete in two to three years. Amazon already reversed course, taking a $920 million write-down in Q4 2024 and shortening server useful lives from six to five years, citing the increased pace of AI-driven technology development.
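The depreciation dispute reduces to the useful-life denominator in a straight-line schedule. A minimal sketch with a hypothetical fleet cost; the figures are illustrative, not any hyperscaler's actual books.

```python
def annual_depreciation(cost: float, useful_life_years: float) -> float:
    """Straight-line annual depreciation expense."""
    return cost / useful_life_years

gpu_fleet_cost = 100.0  # $B, hypothetical GPU fleet

book_expense = annual_depreciation(gpu_fleet_cost, 6)      # ~$16.7B/yr on a 6-year life
economic_expense = annual_depreciation(gpu_fleet_cost, 3)  # ~$33.3B/yr on a 3-year life

# Expense deferred each year if hardware is economically obsolete in half the book life:
understated = economic_expense - book_expense              # ~$16.7B/yr per $100B of fleet
```

Halving the assumed life doubles the annual expense, which is why Burry's $176 billion estimate and Amazon's shift from six- to five-year server lives both hinge on this single assumption.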

Read full answer in: AI Capex Arms Race: Who Blinks First?

Why are all major banks bullish on AI?

Every institution covered here (Goldman Sachs, JPMorgan, Morgan Stanley, UBS, Barclays, BofA, HSBC, Citi, Deutsche Bank, Santander) has direct commercial exposure to the AI boom: advisory fees on data centre deals, asset management inflows from AI-themed funds, trading volume from AI volatility, and lending to infrastructure projects. The unanimous bullishness is genuine analysis in some cases, but the incentive to be bullish is overwhelming in all cases. The absence of a single bearish voice from ten institutions with hundreds of billions in AI-related revenue is itself the most important signal in the collection.

Read full answer in: Every Bulge Bracket Bank Agrees on AI

Is AI in a bubble like the dot-com crash?

Banks argue no. Nvidia trades at 25–30x forward earnings versus Cisco's ~140x in 2000, and the Magnificent 6 trade at ~35x versus the TMT peak of ~55x. But a BofA fund manager survey in October 2025 found 54% of global managers believe AI equities are in a bubble. The dot-com PE comparison is reassuring. The market concentration data (top 10 companies at 40% of the S&P 500, the highest in half a century) is alarming. Both are true simultaneously.

Read full answer in: Every Bulge Bracket Bank Agrees on AI

What are second-order AI beneficiaries and why do they matter?

Second-order AI beneficiaries are companies that use AI infrastructure to serve customers, rather than companies that build the infrastructure itself. Morgan Stanley's historical data shows second-order beneficiaries dramatically outperform first-order enablers over long horizons: Walmart (1,622x) vs Ford (23x) in the railroad era; Netflix (519x) vs Cisco (4x) in the internet era. The paradox is that nearly every bank's current investment positioning still favours first-order enablers: Nvidia, ASML, hyperscalers, data centre REITs.

Read full answer in: Every Bulge Bracket Bank Agrees on AI

What is the AI capex productivity gap?

The AI capex productivity gap describes the lag between massive infrastructure investment and measurable productivity gains. Hyperscalers spent over $400 billion on AI capex in 2025. Yet Santander's research shows only ~10% of US companies are productively using AI, and 42% abandoned GenAI projects in 2024. MIT's 2025 GenAI Divide report found 95% of enterprise pilots fail to reach production. The gap is historically normal. Railroads and electricity both required massive upfront investment before productivity arrived, but the timeline and scale of this cycle are uncertain.

Read full answer in: Every Bulge Bracket Bank Agrees on AI

How much are hyperscalers spending on AI in 2026?

Goldman Sachs estimates hyperscalers were spending approximately $800 million per day on AI-related capex through 2025, with total hyperscaler capex projected to exceed $500 billion in 2026. UBS reported AI capex grew 67% in 2025. Bank of America, using actual GDP data, found AI capex contributed 1.4–1.5 percentage points to US GDP growth in H1 2025, making it the single largest driver of US economic expansion in that period.

Read full answer in: Every Bulge Bracket Bank Agrees on AI

Which bank AI research report is most worth reading?

Bank of America's 'Economic Shifts in the Age of AI' is the most empirically grounded: every claim is anchored to BLS and BEA data, not projections. Santander's macroeconomic report is the most academically rigorous and most willing to present unflattering adoption statistics. Morgan Stanley's second-order effects report contains the most analytically interesting framework for where value ultimately accrues. Goldman Sachs's 'Powering the AI Era' is the most bullish and the most useful for understanding the infrastructure investment thesis at its strongest.

Read full answer in: Every Bulge Bracket Bank Agrees on AI