In August 2011, Marc Andreessen wrote “Why Software Is Eating the World”, an essay about how software was transforming industries, disrupting traditional businesses, and revolutionizing the global economy. Recently, Benedict Evans, a former a16z partner, gave a presentation on generative AI three years after ChatGPT’s launch. His argument, in short:

we know this matters, but we don’t know how.

In this article I will try to explain why I find his framing fascinating but incomplete. Evans structures technology history in cycles. Every 10-15 years, the industry reorganizes around a new platform: mainframes (1960s-70s), PCs (1980s), web (1990s), smartphones (2000s-2010s). Each shift pulls all innovation, investment, and company creation into its orbit. Generative AI appears to be the next platform shift, or it could break the cycle entirely. The range of outcomes spans from “just more software” to a single unified intelligence that handles everything. The pattern recognition is smart, but I think the current evidence points more clearly toward commoditization than Evans suggests, with value flowing up the stack rather than to model providers.

The hyperscalers are spending historic amounts. In 2025, Microsoft, Google, Amazon, and Meta will invest roughly $400 billion in AI infrastructure, more than global telecommunications capex. Microsoft now spends over 30% of revenue on capex, double what Verizon spends. What has this produced? Models that are simultaneously more capable and less defensible. When ChatGPT launched in November 2022, OpenAI had a massive quality advantage. Today, dozens of models cluster around similar performance. DeepSeek proved that anyone with $500 million can build a frontier model. Costs have collapsed. OpenAI’s API pricing has dropped by 97% since GPT-3’s launch, and every year brings an order-of-magnitude decline in the price of a given output.
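To see what those two pricing claims imply, here is a back-of-the-envelope sketch. The $60 starting figure is my own illustrative assumption (roughly GPT-3-era pricing per million tokens), and the clean 10x-per-year decline simply takes the order-of-magnitude claim literally.

```python
# Back-of-the-envelope sketch of what the pricing claims compound to.
# The $60 starting figure is an illustrative assumption (roughly GPT-3-era
# pricing per million tokens), not a quoted price list.

start_price = 60.0  # $ per million tokens at launch (assumed)

# A 97% cut is roughly a 33x reduction on its own:
after_cut = start_price * (1 - 0.97)
print(f"after a 97% cut: ${after_cut:.2f} per million tokens")

# If prices fall ~10x per year, three years compounds to a ~1000x reduction:
for year in range(1, 4):
    print(f"year {year}: ${start_price / 10 ** year:.3f} per million tokens")
```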

Now, $500 million is still an enormous barrier. Only a few dozen entities globally can deploy that capital with acceptable risk. GPT-4’s performance on complex reasoning tasks, Claude’s extended context windows of up to 200,000 tokens, and Gemini’s multimodal capabilities all represent genuine breakthroughs. But the economic moat isn’t obvious to me (yet).

Evans uses an extended metaphor: automation that works disappears. In the 1950s, automatic elevators were AI. Today they’re just elevators. As Larry Tesler noted around 1970:

AI is whatever machines can’t do yet. Once it works, it’s just software.

The question: will LLMs follow this pattern, or is this different?

Current deployment shows clear winners but also real constraints. Software development has seen massive adoption, with GitHub reporting that 92% of developers now use AI coding tools. Marketing has found immediate uses generating ad assets at scale. Customer support has attracted investment, though with the caveat that LLMs produce plausible answers, not necessarily correct ones. Beyond these areas, adoption looks scattered. Deloitte surveys from June 2025 show that roughly 20% of U.S. consumers use generative AI chatbots daily, with another 34% using them weekly or monthly. Enterprise deployment is further behind. McKinsey data shows most AI “agents” remain in pilot or experimental stages. A quarter of CIOs have launched something. Forty percent don’t expect production deployment until 2026 or later.

But here, I think, is where Evans’ “we don’t know” approach misses something important. Consulting firms are booking billions in AI contracts right now. Accenture alone expects $3 billion in GenAI bookings for fiscal 2025. The revenue isn’t coming from the models. It’s coming from integration projects, change management, and process redesign. The pitch is simple: your competitors are moving on this, and you can’t afford to wait. If your competitors are investing and you’re not, you risk being left behind. If everyone invests and AI delivers modest gains, you’ve maintained relative position. If everyone invests and AI delivers nothing, you’ve wasted money but haven’t lost competitive ground (the asymmetry is sketched in the toy example below).

Evans notes that cloud adoption took 20 years to reach 30% of enterprise workloads and is still growing. New technology always takes longer than advocates expect. His most useful analogy is spreadsheets. VisiCalc in the late 1970s transformed accounting. If you were an accountant, you had to have it. If you were a lawyer, you thought “that’s nice for my accountant.” ChatGPT today has the same dynamic: certain people with certain jobs find it immediately essential, while everyone else sees a demo and doesn’t know what to do with the blank prompt. This is right, and it suggests we’re early. But it doesn’t tell us where value will accumulate.
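Back to the consulting pitch: it is, at bottom, a small decision matrix. Here is a toy sketch of the outcomes described above, with qualitative labels rather than measured payoffs; the scenarios are the ones from the text, not data.

```python
# Toy enumeration of the "you can't afford to wait" logic. The scenarios and
# outcomes are the qualitative ones from the text above, not measured payoffs.

cases = [
    ("you skip, rivals invest, AI delivers",   "you risk being left behind"),
    ("everyone invests, AI delivers modestly", "relative position maintained"),
    ("everyone invests, AI delivers nothing",  "money wasted, no ground lost"),
]

for scenario, outcome in cases:
    print(f"{scenario:<42} -> {outcome}")
```

The point of the sketch is only that the perceived downside of waiting dominates the decision, which is exactly what sells the integration projects.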

The standard pattern for deploying technology goes in stages: (1) Absorb it (make it a feature, automate obvious tasks). (2) Innovate (create new products, unbundle incumbents). (3) Disrupt (redefine what the market is). We’re mostly in stage one. Stage two is happening in pockets: Y Combinator’s recent batches are overwhelmingly AI-focused, betting on thousands of new companies unbundling existing software (startups are attacking specific enterprise problems like converting COBOL to Java or reconfiguring telco billing systems). Stage three remains speculative.

From an economic perspective, there’s the automation question: do you do the same work with fewer people, or more work with the same people? Companies whose competitive advantage was “we can afford to hire enough people to do this” face real pressure. Companies whose advantage was unique data, customer relationships, or distribution may get stronger. This is the standard economic analysis of labor-augmenting technical change, and it probably holds here too.
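To make the “fewer people versus more output” question concrete, here is a minimal numeric sketch of labor-augmenting technical change using a Cobb-Douglas production function; the functional form, the capital share, and every number in it are assumptions chosen purely for illustration.

```python
# Minimal sketch (my own illustration, not from Evans): labor-augmenting
# technical change in a Cobb-Douglas production function,
#   Y = K**alpha * (A * L)**(1 - alpha),
# where A is productivity per worker. alpha, K, and the headcounts below are
# arbitrary assumptions used only to contrast the two scenarios.

ALPHA = 0.3   # capital share (assumed)
K = 100.0     # capital stock, arbitrary units (assumed)

def output(A: float, L: float) -> float:
    """Output produced with fixed capital K and effective labor A * L."""
    return K ** ALPHA * (A * L) ** (1 - ALPHA)

baseline = output(A=1.0, L=50)

# "More work with the same people": productivity doubles, headcount unchanged.
more_output = output(A=2.0, L=50)

# "The same work with fewer people": productivity doubles, headcount halves,
# so effective labor A * L, and therefore output, stays where it was.
same_output = output(A=2.0, L=25)

print(f"baseline:              {baseline:.1f}")
print(f"same headcount, 2x A:  {more_output:.1f}")
print(f"half headcount, 2x A:  {same_output:.1f}  (matches the baseline)")
```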

Continue reading Is AI Really Eating the World? AGI, Networks, and Value [2/2]