AI FAQ

20 most recent of 105 questions from 23 posts about AI

Frequently asked questions about artificial intelligence, machine learning, LLMs, and AI infrastructure

Why does AI output converge to the mean?

Three structural forces drive convergence. LLMs generate the most statistically probable next token, which trends toward average output. RLHF compounds this with a typicality weight of α=0.57, in effect training models to produce familiar-sounding responses. And model collapse, documented in Nature by Shumailov et al. (2024), shows that models trained on AI-generated content lose their distributional tails and converge to point estimates with minimal variance.
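The tail-loss dynamic can be seen in a toy sketch (not the paper's actual experimental setup): repeatedly refit a Gaussian to its own most "typical" samples, mimicking mode-seeking generation, and the variance collapses toward a point estimate. All parameters here (sample counts, keep fraction) are illustrative choices, not values from the study.

```python
# Toy illustration of model collapse: each generation, draw samples from
# the current model, keep only the half closest to the mean (the most
# "typical" outputs), and refit. The distributional tails vanish.
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0  # generation-0 "model"
for gen in range(10):
    samples = [random.gauss(mu, sigma) for _ in range(2000)]
    samples.sort(key=lambda x: abs(x - mu))   # rank by typicality
    kept = samples[:1000]                     # discard the tails
    mu, sigma = statistics.mean(kept), statistics.stdev(kept)

# After ten generations the spread has collapsed far below the initial 1.0.
print(round(sigma, 4))
```

Each round of typicality filtering shrinks the standard deviation by a roughly constant factor, so variance decays geometrically across generations.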

Read full answer in: The Impossible Backhand

What is the ninth-power scaling curve for AI?

MIT researchers Thompson, Greenewald, Lee, and Manso found computational cost scales with at least the fourth power of improvement in theory and the ninth power in practice. To halve an AI error rate requires more than 500x the computational resources. AlexNet trained on two GPUs in six days in 2012; NASNet-A halved the error rate in 2018 using over 1,000x the compute.
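The "more than 500x" figure follows directly from the ninth-power relationship: halving the error rate is a 2x improvement, and 2^9 = 512. A minimal sketch of that arithmetic (the function name and framing are illustrative, not from the paper):

```python
# Compute multiplier implied by power-law scaling of cost vs. improvement.
def relative_compute(error_improvement: float, exponent: int = 9) -> float:
    """Multiplier on compute needed for a given error-rate improvement.

    error_improvement: factor by which the error rate shrinks (2.0 = halved).
    exponent: 9 for the empirical curve, 4 for the theoretical bound.
    """
    return error_improvement ** exponent

print(relative_compute(2.0))              # 512.0 -> "more than 500x"
print(relative_compute(2.0, exponent=4))  # 16.0 under the theoretical bound
```

The gap between 16x (theory) and 512x (practice) is why compute-driven progress becomes so expensive so quickly.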

Read full answer in: The Impossible Backhand

How do AI models perform on Humanity's Last Exam in 2026?

As of February 2026, the top-performing AI model (Gemini 3 Pro Preview) scores 37.5% on Humanity's Last Exam, a benchmark of 2,500 expert-crafted questions across 100+ academic domains. Most models score below 30%. Human domain experts average roughly 90%, revealing a 53-point gap. Calibration errors range from 34% to 89%, meaning models are systematically overconfident.

Read full answer in: The Impossible Backhand

What is the centaur model for human-AI collaboration?

The centaur model describes human-AI collaboration where each handles what they do best. The Harvard/BCG study of 758 consultants found AI users completed 12.2% more tasks, 25.1% faster, at 40% higher quality within AI's capability frontier. But on tasks outside AI's frontier, AI users were 19 percentage points less likely to get correct answers. The centaur divides tasks by strengths; blind delegators adopt AI output without interrogation.

Read full answer in: The Impossible Backhand

Does AI replace or augment domain expertise?

The empirical evidence favors augmentation. Oxford researchers found complementary effects of AI on jobs are 1.7x larger than substitution effects. The World Economic Forum projects a net gain of 78 million jobs by 2030. The centaur model, where human experts collaborate with AI, consistently outperforms either alone across finance, consulting, and clinical decision-making.

Read full answer in: The Impossible Backhand

Will AI make human experts obsolete?

No. The Harvard/BCG study, LSU Finance centaur analyst study, and Mayo Clinic clinical experiments all show human-AI collaboration outperforms AI alone. But the quality of the human contribution is decisive: professionals who blindly trust AI outside its frontier perform worse than those without AI. Domain expertise is the irreducible ingredient that makes the centaur model work.

Read full answer in: The Impossible Backhand

What are AI hallucination rates in professional domains?

Yale researcher Matthew Dahl found AI hallucination rates of 69-88% on specific legal queries. Stanford HAI found even specialized legal AI tools hallucinate 17-34% of the time. Damien Charlotin's database tracks 914 cases of hallucinated content in legal filings worldwide. Medical AI hallucinations are especially dangerous because subtle clinical errors may not raise immediate suspicion.

Read full answer in: The Impossible Backhand

What caused the February 2026 enterprise software sell-off?

The immediate catalyst was Anthropic releasing 11 open-source plugins for Claude Cowork on January 30, 2026, covering Legal, Sales, Marketing, Finance, and other departments. Thomson Reuters fell 16% and LegalZoom fell 20% in a single session. The broader IGV software ETF dropped 32% from its September 2025 peak to a low of $79.65, with roughly $2 trillion in market cap destroyed.

Read full answer in: The SaaSpocalypse Paradox

What is the BofA paradox in the software sell-off?

Bank of America's Vivek Arya identified a logical inconsistency: investors are simultaneously punishing hyperscaler stocks because AI capex might generate weak returns, while destroying software stocks because AI will be so pervasive it replaces all existing software. Both cannot be true. If AI tools are not generating ROI, they are not replacing enterprise software. If they are replacing enterprise software, the hyperscalers are earning extraordinary returns.

Read full answer in: The SaaSpocalypse Paradox

Why does software now trade cheaper than semiconductors?

The Russell 1000 Software subsector trades at 32.4x forward earnings versus 43.6x for Semiconductors, an 11.2-point gap of a kind that has not historically persisted. Recurring-revenue businesses with 90%+ gross margins and 95%+ renewal rates now carry a lower multiple than cyclical chipmakers with 40-60% margins and concentrated customer bases.

Read full answer in: The SaaSpocalypse Paradox

Which hyperscaler can fund AI capex from operating cash flow?

Only Microsoft generates cash from operations (net of dividends and buybacks) in excess of capital expenditure in FY2026, with roughly $110B in cash versus $105B in capex. Alphabet, Amazon, Meta, and Oracle are all capex-negative, with Oracle showing the widest gap at $20B cash versus $50B capex.

Read full answer in: The SaaSpocalypse Paradox

Is all enterprise software equally at risk from AI disruption?

No. The risk varies widely by category. Deterministic, mission-critical systems like ERP, cybersecurity, and observability face low disruption risk and will likely absorb AI as additive capability. Probabilistic workflow tools like content creation, tier-1 support, and basic analytics face genuine existential risk. The market is pricing the entire software stack as if every category faces the same threat, which is a category error.

Read full answer in: The SaaSpocalypse Paradox

Are enterprise software earnings actually declining?

No. Q4 2025 earnings showed resilient or accelerating growth across major software names. ServiceNow grew subscription revenue 21%, Palantir grew 70.5%, Datadog grew 29%, and the sector is delivering 17% aggregate earnings growth in 2026. Every major name beat consensus estimates.

Read full answer in: The SaaSpocalypse Paradox

Could AI actually expand the software market rather than shrink it?

Goldman Sachs Research projects the application software market growing to $780 billion by 2030 at a 13% CAGR, with agents accounting for over 60% of the total. a16z argues the addressable market expands from roughly $350 billion in enterprise software spend to the $6 trillion white-collar services market if AI transitions from productivity tools to completing work itself, a roughly 20x TAM expansion.

Read full answer in: The SaaSpocalypse Paradox

What are the six layers of the enterprise AI agent stack?

The enterprise AI agent stack decomposes into six specialized layers: security, context, models, orchestration, agents, and interfaces. Each layer has different economics, rates of change, and sources of lock-in, which is why specialists tend to outperform monolithic platforms that try to own every layer.

Read full answer in: Don't Go Monolithic; The Agent Stack Is Stratifying

Why are AI models becoming commodities in enterprise settings?

Foundation models are converging toward commodity infrastructure because training costs scale roughly 2.4x per year, limiting frontier development to a handful of hyperscale organizations. With 37% of enterprises already using five or more models in production, different tasks demand different models, making single-provider lock-in the new version of single-cloud risk.
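A ~2.4x annual growth rate compounds brutally, which is what concentrates frontier training among a few hyperscalers. A quick sketch of the compounding (the $100M starting cost is a hypothetical placeholder, not a figure from the source):

```python
# Project frontier training cost under ~2.4x/year growth in training spend.
def projected_cost(base_cost: float, years: int, growth: float = 2.4) -> float:
    """Cost of a frontier training run after `years` of compounding growth."""
    return base_cost * growth ** years

# A hypothetical $100M run today, projected forward (in $M):
for year in range(4):
    print(year, round(projected_cost(100e6, year) / 1e6, 1))
# year 0: 100.0, year 1: 240.0, year 2: 576.0, year 3: 1382.4
```

At this rate a hypothetical $100M run becomes a roughly $1.4B run within three years, an order of magnitude in about three doubling periods.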

Read full answer in: Don't Go Monolithic; The Agent Stack Is Stratifying

What is an organizational world model in AI?

An organizational world model is a learned representation of how a specific company operates, built from accumulated process knowledge, interaction histories, and workflow patterns. Unlike model weights that any well-funded lab can approximate, this process-level understanding is genuinely unique to each organization and compounds in value with every agent execution.

Read full answer in: Don't Go Monolithic; The Agent Stack Is Stratifying

Why do many enterprise AI agent deployments fail?

Most enterprise AI failures stem from shallow context rather than poor models. Agents with access only to systems of record can retrieve the right documents but cannot reconstruct the reasoning processes humans follow. Without process knowledge capturing how decisions are made, agents produce outputs that are technically plausible but operationally useless.

Read full answer in: Don't Go Monolithic; The Agent Stack Is Stratifying

How should enterprises avoid vendor lock-in in the AI agent stack?

Enterprises should insist on open interoperability standards such as Anthropic's Model Context Protocol (MCP) and Google's Agent-to-Agent protocol (A2A), treat their accumulated organizational context as portable IP that is not locked to any vendor, and architect each stack layer independently, since the layers evolve at different rates.

Read full answer in: Don't Go Monolithic; The Agent Stack Is Stratifying

What is the enterprise AI context layer and why does it matter?

The enterprise AI context layer is the infrastructure that gives agents organizational understanding beyond raw data retrieval. It operates at two depths. Layer 1 connects data sources and retrieval pipelines (increasingly commoditized), while Layer 2 captures process-level knowledge like decision-making patterns, workflow sequences, and informal coordination. Layer 2 is where defensibility lives because it encodes how an organization actually operates, not just what it has recorded.

Read full answer in: Don't Go Monolithic; The Agent Stack Is Stratifying