Damodaran on Gold's 2025 Surge

Aswath Damodaran’s latest analysis of gold’s 2025 surge works through gold’s contradictory nature as a collectible rather than an asset with cash flows, showing why it’s impossible to “value” gold in the traditional sense, yet entirely possible to understand what drives its pricing.

Even though gold is outperforming almost all other assets in my portfolio this year, I fundamentally don’t like holding it. I’m a Buffett disciple: gold is an unproductive asset that generates no earnings and pays no dividends.

But Damodaran’s framework helps explain why tolerating it anyway might be worthwhile. It’s less an investment than insurance against the tail risks of hyperinflation and catastrophic market dislocations, scenarios where correlations go to one and traditional diversification fails. The dissonance between what I believe intellectually (productive assets compound wealth) and what I’m actually doing (holding some gold anyway) probably says more about 2025’s macro uncertainty than any principled investment thesis.

Damodaran’s blog linked in this post’s title.

The Bicycle Needs Riding to be Understood

Some concepts are easy to grasp in the abstract. Boiling water: apply heat and wait. Others you really need to try. You only think you understand how a bicycle works until you learn to ride one.

You should write an LLM agent—not because they’re revolutionary, but because the bicycle needs riding to be understood. Having built agents myself, I find Ptacek’s central insight resonates: the behavior surprises in specific ways, particularly around how models scale effort with complexity before inexplicably retreating.

Ptacek walks through building a functioning agent in roughly 50 lines of Python, demonstrating how an LLM with ping access autonomously chose multiple Google endpoints without explicit instruction, a moment that crystallizes both promise and unpredictability. His broader point matches my experience: context engineering isn’t mystical but straightforward programming—managing token budgets, orchestrating sub-agents, balancing explicit loops against emergent behavior. The open problems in agent design—titrating nondeterminism, connecting to ground truth, allocating tokens—remain remarkably accessible to individual experimentation, each iteration taking minutes rather than requiring institutional resources.
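To make this concrete, here’s a minimal sketch in the spirit of Ptacek’s ~50-line example, assuming the OpenAI Python SDK’s tool-calling interface; the model name and the single ping tool are illustrative choices of mine, not taken from his post:

```python
# Minimal agent loop: an LLM plus one tool (ping), in the spirit of Ptacek's example.
# Assumes `pip install openai` and OPENAI_API_KEY; the model name is illustrative.
import json
import subprocess
from openai import OpenAI

client = OpenAI()

TOOLS = [{
    "type": "function",
    "function": {
        "name": "ping",
        "description": "Ping a host and return the raw output.",
        "parameters": {
            "type": "object",
            "properties": {"host": {"type": "string"}},
            "required": ["host"],
        },
    },
}]

def ping(host: str) -> str:
    # Unix-style ping, three probes; return stdout+stderr so the model sees failures too.
    result = subprocess.run(["ping", "-c", "3", host],
                            capture_output=True, text=True, timeout=30)
    return result.stdout + result.stderr

def run_agent(prompt: str, model: str = "gpt-4o-mini") -> str:
    messages = [{"role": "user", "content": prompt}]
    while True:
        msg = client.chat.completions.create(
            model=model, messages=messages, tools=TOOLS,
        ).choices[0].message
        messages.append(msg)
        if not msg.tool_calls:          # no tool requested: the agent is done
            return msg.content
        for call in msg.tool_calls:     # execute each requested tool call
            args = json.loads(call.function.arguments)
            messages.append({"role": "tool", "tool_call_id": call.id,
                             "content": ping(**args)})

if __name__ == "__main__":
    print(run_agent("Check whether google.com is reachable."))
```

The whole agent really is just a loop: send the transcript, run whatever tool the model asks for, append the result, and repeat until it answers in plain text.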

Blog by Thomas Ptacek linked in this post’s title.

AI models as standalone P&Ls

Microsoft reported earnings for the quarter ended Sept. […] buried in its financial filings were a couple of passages suggesting that OpenAI suffered a net loss of $11.5 billion or more during the quarter.

For every dollar of revenue, they’re allegedly spending roughly $5 to deliver the product. What initially sounds like a joke about “making it up on volume” points to a more fundamental problem facing OpenAI and its competitors. AI companies are locked into continuously releasing more powerful (and expensive) models. If they stop, open-source alternatives will catch up and offer equivalent capabilities at substantially lower costs. This creates an uncomfortable dynamic. If your current model requires spending more than you earn just to fund the next generation, the path to profitability becomes unclear—perhaps impossible.

Anthropic CEO Dario Amodei (everybody’s favorite AI CEO) recently offered a different perspective in a conversation with Stripe co-founder John Collison. He argues that treating each model as an independent business unit reveals a different picture than conventional accounting suggests.

Let’s say in 2023, you train a model that costs $100 million, and then you deploy it in 2024 and it makes $200 million of revenue.

So far, this looks profitable, a solid 2x return on the training investment. But here’s where it gets complicated.

Meanwhile, because of the scaling laws, in 2024, you also train a model that costs $1 billion. If you look in a conventional way at the profit and loss of the company you’ve lost $100 million the first year, you’ve lost $800 million the second year, and you’ve lost $8 billion in the third year, so it looks like it’s getting worse and worse.

The pattern continues:

In 2025, you get $2 billion of revenue from that $1 billion model trained the previous year.

Again, viewed in isolation, this model returned 2x its training cost.

And you spend $10 billion to train the model for the following year.

The losses appear to accelerate dramatically, from $100 million to $800 million to $8 billion.

This is where Amodei’s reframing becomes interesting.

If you consider each model to be a company, the model that was trained in 2023 was profitable. You paid $100 million and then it made $200 million of revenue.

He also acknowledges there are inference costs (the actual computing expenses of running the model for users) but suggests these don’t fundamentally change the picture in his simplified example. His core argument:

If every model was a company, the model in this example is actually profitable. What’s going on is that at the same time as you’re reaping the benefits from one company, you’re founding another company that’s much more expensive and requires much more upfront R&D investment.

This is essentially an argument that AI companies are building a portfolio of profitable products, but the accounting makes it look terrible because each successive “product” costs 10x more than the last to develop. The losses stem from overlapping these profitable cycles while exponentially increasing investment scale. But this framework only works if two critical assumptions hold: (1) Each model consistently returns roughly 2x its training cost in revenue, and (2) The improvements from spending 10x more justify that investment—meaning customers will pay enough more for the better model to maintain that 2x return.
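A toy calculation makes the contrast explicit. The figures below are just Amodei’s hypothetical numbers; inference costs are ignored, as in his example:

```python
# Amodei's toy numbers: each model costs 10x the last to train and earns
# 2x its training cost in revenue the following year. Inference costs ignored.
train_cost = {2023: 0.1, 2024: 1.0, 2025: 10.0}   # $bn spent on training that year
revenue    = {2024: 0.2, 2025: 2.0}               # $bn earned by prior year's model

# Conventional company P&L: this year's revenue minus this year's training spend.
for year in train_cost:
    print(f"{year}: company P&L {revenue.get(year, 0.0) - train_cost[year]:+.1f} $bn")

# Per-model view: each model vintage as a standalone "company".
for vintage, cost in train_cost.items():
    rev = revenue.get(vintage + 1)
    if rev is not None:
        print(f"model {vintage}: {rev - cost:+.1f} $bn ({rev / cost:.0f}x return)")
```

The company-level lines print -0.1, -0.8, and -8.0: losses that look like they’re exploding. The per-model lines print +0.1 and +1.0, each a 2x return on its training cost.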

Amodei outlines two ways this resolves:

So the way that it’s going to shake out is this will keep going up until the numbers go very large and the models can’t get larger, and, you know, then it’ll be a large, very profitable business.

In this first scenario, scaling hits physical or practical limits. You’ve maxed out available compute, data, or capability improvements. Training costs plateau because you literally can’t build a meaningfully larger model. At that point, companies stop needing exponentially larger investments and begin harvesting profits from their final-generation models. The second scenario is less optimistic:

Or at some point the models will stop getting better, right? The march to AGI will be halted for some reason.

If the improvements stop delivering proportional returns before reaching natural limits, companies face what Amodei calls overhang.

And then perhaps there’ll be some overhang, so there’ll be a one-time, ‘Oh man, we spent a lot of money and we didn’t get anything for it,’ and then the business returns to whatever scale it was at.

What Amodei’s framework doesn’t directly address is the open-source problem. If training Model C costs $10 billion but open-source alternatives reach comparable performance six months later, that 2x return window might not materialize. The entire argument depends on maintaining a significant capability lead that customers will pay premium prices for. There’s also the question of whether the 2x return assumption holds as models become more expensive. The jump from $100 million to $1 billion to $10 billion in training costs assumes that customers will consistently value the improvements enough to double revenue.

Working with Models

There was this “I work with Models” joke that I first heard years ago from an analyst working on a valuation model (see my previous post). I guess it has become more relevant than ever:

This monograph presents the core principles that have guided the development of diffusion models, tracing their origins and showing how diverse formulations arise from shared mathematical ideas. Diffusion modeling starts by defining a forward process that gradually corrupts data into noise, linking the data distribution to a simple prior through a continuum of intermediate distributions.
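As a concrete illustration of that forward process, here’s a minimal sketch in standard DDPM-style notation: a linear beta schedule and the closed-form marginal q(x_t | x_0). This is generic textbook material, not code from the monograph:

```python
# Forward (noising) process of a DDPM-style diffusion model:
# x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps,  eps ~ N(0, I).
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)    # linear per-step noise variances
alpha_bar = np.cumprod(1.0 - betas)   # abar_t = prod_{s<=t} (1 - beta_s)

def q_sample(x0: np.ndarray, t: int) -> np.ndarray:
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = rng.standard_normal(8)   # toy "data" vector
print(q_sample(x0, 10))       # early step: still close to the data
print(q_sample(x0, T - 1))    # final step: essentially pure Gaussian noise
```

Learning to reverse this corruption step by step is what the model is trained to do; the diverse formulations the monograph surveys are variations on how that reversal is parameterized.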

If you want to get into this topic, be sure to check out Stefano Ermon’s CS236 Deep Generative Models course. Lecture recordings of the full course can also be found on YouTube.

Original paper linked in this post’s title.

Pozsar's Bretton Woods III: Three Years Later [2/2]

Start by reading Pozsar’s Bretton Woods III: The Framework [1/2]

Now, what actually happened in the three years since Pozsar published this framework? (1) Dollar reserve diversification is happening, but gradually: Foreign central bank Treasury holdings declined from peaks exceeding $7.5 trillion to levels below $7 trillion. This represents steady diversification away from dollar-denominated assets, though not a dramatic collapse. (2) Gold has performed strongly: From roughly $1'900/oz when Pozsar published his dispatches to peaks above $4'000/oz today, gold has appreciated substantially, consistent with increased demand for “outside money.” (3) Alternative payment systems are developing: Various nations continue building infrastructure for non-dollar trade settlement. While these systems remain in preliminary stages rather than fully operational alternatives to SWIFT, development timelines could speed up following specific triggering events. (4) The dollar itself stayed strong for longer than predicted: Despite forecasts of dollar weakness, 2024 saw the dollar’s best performance against a basket of major currencies since 2015. Only this year did the DXY index (which tracks the dollar against major trading partners) fall about 11%, marking the end of that decade-long rally. (5) Commodity collateral is increasingly important: Research on commodities as collateral shows that under capital controls and collateral constraints, investors import commodities and pledge them as collateral. Higher collateral demands increase commodity prices and affect the inventory-convenience yield relationship.

One of Pozsar’s more provocative arguments concerns China’s strategic options. With approximately $3 trillion in foreign exchange reserves heavily weighted toward dollars and Treasuries, China faces the same calculus as any holder of large dollar reserves: what is the risk these could be frozen? Pozsar outlined two theoretical paths for China: (1) Sell Treasuries to purchase commodities directly (especially discounted Russian commodities), thereby converting financial claims into physical resources. (2) Print renminbi to purchase commodities, creating a “eurorenminbi” market parallel to the eurodollar system.

The first option provides inflation control for China (securing physical resources) while potentially raising yields in Treasury markets. The second option represents a more fundamental challenge to dollar dominance, the birth of an alternative offshore currency market backed by commodity reserves rather than financial reserves. In practice, we’ve seen elements of both. China has increased commodity imports from Russia substantially. The internationalization of the renminbi has progressed, though more slowly than some expected, constrained by China’s capital controls and the relative underdevelopment of its financial markets compared to dollar markets.

Regardless of whether “Bretton Woods III” emerges exactly as described, several insights from Pozsar’s framework appear durable. (1) Central banks control the nominal domain, not the real domain: Monetary policy can influence demand, manage liquidity, and stabilize financial markets. It cannot conjure physical resources, build supply chains, or speed up energy transitions. (2) Physical infrastructure matters for financial markets: The number of VLCCs, the capacity of the Suez Canal, the efficiency of port facilities, these real-world constraints bind financial flows. Understanding the infrastructure underlying commodity movements provides insight into funding market dynamics. (3) Collateralization is changing: The trend toward commodity-backed finance, warehouse receipt systems, and physical collateral reflects both technological improvements (better monitoring and verification) and strategic shifts (diversification away from pure financial claims). As the FSB noted in 2023, banks play a vital role in the commodities ecosystem, providing not just credit but clearing services and intermediation between commodity firms and central counterparties. (4) Geopolitical risk affects monetary arrangements: The weaponization of reserve assets, however justified in specific circumstances, changes the risk calculation for all reserve holders. This doesn’t mean immediate de-dollarization, but it does mean persistent, gradual diversification.

So what can we take from this for today: (1) Funding market stresses may be more persistent: If commodity traders require more financing for longer durations due to less efficient trade routes, and if banks face balance sheet constraints from regulatory requirements or QT, term funding premia may remain elevated relative to overnight rates. The FRA-OIS spread, the spread between forward rate agreements and overnight indexed swaps, becomes a window into these dynamics. (2) Cross-currency basis swaps signal more than rate differentials: Persistent deviations from covered interest parity reflect structural factors: global trade reconfiguration, reserve diversification, and the changing geography of dollar funding demand. These aren’t temporary anomalies to be arbitraged away but potentially persistent features of the new system. (3) Commodity volatility has monetary policy implications that are difficult to manage: When commodity prices surge due to supply disruptions rather than demand strength, central banks face an ugly tradeoff: tighten policy to control inflation headlines while risking recession, or accommodate the price shock and accept higher inflation. Unlike demand-driven inflation, supply-driven commodity inflation doesn’t respond well to rate hikes. (4) Infrastructure bottlenecks matter: Just as G-SIB constraints around year-end affect money market functioning, shipping capacity constraints and logistical bottlenecks affect commodity prices and, through them, inflation. Monitoring the “real plumbing,” freight rates, port congestion, pipeline capacity, provides early warning signals for inflation pressures.

Perhaps the most valuable way to engage with Bretton Woods III is not as a prediction to be validated or refuted, but as a framework for thinking about the intersection of geopolitics, commodities, and money. It forces attention to questions that are easy to overlook: (a) How do physical constraints on commodity flows affect financial market plumbing? (b) What risks do reserve holders face that aren’t captured in traditional financial risk metrics? (c) Where do central bank powers end and other forms of power, military, diplomatic, infrastructural, begin? (d) How do the “real” and “nominal” domains interact during periods of stress?

The current environment shows elements consistent with the framework: gradual reserve diversification, persistent commodity volatility, funding market stresses related to term commodity financing, and increasing focus on supply chain resilience over pure efficiency. It also shows elements inconsistent with it: dollar strength, the slow pace of alternative systems, and the resilience of dollar-based financial infrastructure. What seems clear is that the assumptions underlying Bretton Woods II, that dollar reserves are nearly risk-free, that globalized supply chains should be optimized for cost above all else, that central banks can manage most monetary disturbances, are being questioned in ways they weren’t five years ago. Whether that questioning leads to a new monetary order or simply a modified version of the current one remains to be seen. But Pozsar’s framework provides a useful lens for watching the process unfold, connecting developments in commodity markets, funding markets, and geopolitical arrangements into a coherent story about how the global financial system actually works.

Pozsar’s full Money Notes series is available through his website, and Perry Mehrling’s course Economics of Money and Banking provides excellent background on the “money view” that underpins this analysis.

Pozsar's Bretton Woods III: The Framework [1/2]

In March 2022, as Western nations imposed unprecedented sanctions following Russia’s invasion of Ukraine, Zoltan Pozsar published a series of dispatches that would become some of the most discussed pieces in financial markets that year. The core thesis was stark: we were witnessing the birth of “Bretton Woods III,” a fundamental shift in how the global monetary system operates. Nearly three years later, with more data on de-dollarization trends, commodity market dynamics, and structural changes in global trade, it’s worth revisiting this framework.

I first heard of Pozsar at Credit Suisse during the 2019 repo market disruptions and the March 2020 funding crisis, when his framework explained market dynamics in a way I had never seen before. Before joining Credit Suisse as a short-term rate strategist, Pozsar spent years at the Federal Reserve (where he created the map of the shadow banking system, which prompted the G20 to initiate regulatory measures in this area) and the U.S. Treasury. His work focuses on what he calls the “plumbing” of financial markets, the often-overlooked mechanisms through which money actually flows through the system. His intellectual approach draws heavily from Perry Mehrling’s “money view,” which treats money as having four distinct prices rather than being a simple unit of account.

Pozsar’s Bretton Woods III framework rests on a straightforward distinction. “Inside money” refers to claims on institutions: Treasury securities, bank deposits, central bank reserves. “Outside money” refers to commodities (gold, oil, wheat, metals) that have intrinsic value independent of any institution’s promise.

Bretton Woods I (1944-1971) was backed by gold, outside money. The U.S. dollar was convertible to gold at a fixed rate, and other currencies were pegged to the dollar. When this system collapsed in 1971, Bretton Woods II emerged: a system where dollars were backed by U.S. Treasury securities, inside money. Countries accumulated dollar reserves, primarily in the form of Treasuries, to support their currencies and facilitate international trade.

Pozsar’s argument: the moment Western nations froze Russian foreign exchange reserves, the assumed risk-free nature of these dollar holdings changed fundamentally. What had been viewed as having negligible credit risk suddenly carried confiscation risk. For any country potentially facing future sanctions, the calculus of holding large dollar reserve positions shifted. Hence Bretton Woods III: a system where countries increasingly prefer holding reserves in the form of commodities and gold, outside money that cannot be frozen by another government’s decision.

To understand Pozsar’s analysis, we need to understand his analytical framework. Perry Mehrling teaches that money has four prices: (1) Par: The one-for-one exchangeability of different types of money. Your bank deposit should convert to cash at par. Money market fund shares should trade at $1. When par breaks, as it did in 2008 when money market funds “broke the buck,” the payments system itself is threatened. (2) Interest: The price of future money versus money today. This is the domain of overnight rates, term funding rates, and the various “bases” (spreads) between different funding markets. When covered interest parity breaks down and cross-currency basis swaps widen, it signals stress in the ability to transform one currency into another over time. (3) Exchange rate: The price of foreign money. How many yen or euros does a dollar buy? Fixed exchange rate regimes can collapse when countries lack sufficient reserves, as happened across Southeast Asia in 1997. (4) Price level: The price of commodities in terms of money. How much does oil, wheat, or copper cost? This determines not just headline inflation but feeds through into the price of virtually everything in the economy.

Central banks have powerful tools for managing the first three prices. They can provide liquidity to preserve par, influence interest rates through policy, and intervene in foreign exchange markets. But the fourth price, the price level, particularly when driven by commodity supply shocks, is far harder to control. As Pozsar puts it: “You can print money, but not oil to heat or wheat to eat.”

Pozsar’s contribution was to extend Mehrling’s framework into what he calls the “real domain,” the physical infrastructure underlying commodity flows. For each of the three non-commodity prices of money, there’s a parallel in commodity markets: (1) Foreign exchange ↔ Foreign cargo: Just as you exchange currencies, you exchange dollars for foreign-sourced commodities. (2) Interest (time value of money) ↔ Shipping: Just as lending has a time dimension, moving commodities from port A to port B takes time and requires financing. (3) Par (stability) ↔ Protection: Just as central banks protect the convertibility of different money forms, military and diplomatic power protects commodity shipping routes.

This mapping reveals something important: commodity markets have their own “plumbing” that works parallel to financial plumbing. And when this real infrastructure gets disrupted, it creates stresses that purely monetary policy cannot resolve.

One of the most concrete examples in Pozsar’s March 2022 dispatches illustrates this intersection between finance and physical reality. Consider what happens when Russian oil exports to Europe are disrupted and must be rerouted to Asia. Previously, Russian oil traveled roughly 1-2 weeks from Baltic ports to European refineries on Aframax carriers (ships carrying about 600,000 barrels). The financing required was relatively short-term, a week or two. Post-sanctions, the same oil must travel to Asian buyers. But the Baltic ports can’t accommodate Very Large Crude Carriers (VLCCs), which carry 2 million barrels. So the oil must first be loaded onto Aframax vessels, sailed to a transfer point, transferred ship-to-ship to VLCCs, then shipped to Asia, a journey of roughly four months.

The same volume of oil, moved the same distance globally, now requires: (a) More ships (Aframax vessels for initial transport plus VLCCs for long-haul). (b) More time (4 months instead of 1-2 weeks). (c) More financing (commodity traders must borrow for much longer terms). (d) More capital tied up by banks (longer-duration loans against volatile commodities).

Pozsar estimated this rerouting alone would encumber approximately 80 VLCCs, roughly 10% of global VLCC capacity, in permanent use. The financial implication: banks’ liquidity coverage ratio (LCR) needs increase because they’re extending more term credit to finance these longer shipping durations. When commodity trading requires more financing for longer durations, it competes with other demands on bank balance sheets. If this happens simultaneously with quantitative tightening (QT), when the central bank is draining reserves from the system, funding stresses become more likely. As Pozsar noted: “In 2019, o/n repo rates popped because banks got to LCR and they stopped lending reserves. In 2022, term credit to commodity traders may dry up because QT will soon begin in an environment where banks’ LCR needs are going up, not down.”
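A back-of-the-envelope calculation shows why transit time translates directly into financing demand. The transit times follow Pozsar’s example; the oil price and rerouted daily volume below are my own illustrative assumptions:

```python
# Capital tied up financing oil afloat: cargo value scales with transit time.
# Transit times from Pozsar's example; price and volume are assumed for illustration.
OIL_PRICE = 100          # $/bbl (assumption)
DAILY_EXPORTS = 1.5e6    # bbl/day rerouted (assumption)

def capital_afloat(transit_days: float) -> float:
    """Dollar value of cargo in transit, and hence financed, at any moment."""
    return DAILY_EXPORTS * transit_days * OIL_PRICE

print(f"Baltic -> Europe (~10 days):  ${capital_afloat(10) / 1e9:.1f}bn afloat")
print(f"Baltic -> Asia  (~120 days): ${capital_afloat(120) / 1e9:.1f}bn afloat")
```

Under these assumptions the financing requirement grows from roughly $1.5 billion to $18 billion, a twelvefold increase for moving exactly the same oil; this is the balance sheet pressure behind the LCR point above.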

One aspect of the framework that deserves more attention relates to dollar funding for non-U.S. banks. According to recent Dallas Fed research, banks headquartered outside the United States hold approximately $16 trillion in U.S. dollar assets, comparable in magnitude to the $22 trillion held by U.S.-based institutions. The critical difference: U.S. banks have access to the Federal Reserve’s emergency liquidity facilities during periods of stress. Foreign banks do not have a U.S. dollar lender of last resort. During the COVID-19 crisis, the Fed expanded dollar swap lines to foreign central banks precisely to address this vulnerability, about $450 billion, roughly one-sixth of the Fed’s balance sheet expansion in early 2020. The structural dependency on dollar funding creates ongoing vulnerabilities. When dollars become scarce globally, whether due to Fed policy tightening, shifts in risk sentiment, or disruptions in commodity financing, foreign banks face balance sheet pressures that can amplify stress. The covered interest parity violations that Pozsar frequently discusses reflect these frictions: direct dollar borrowing and synthetic dollar borrowing through FX swaps theoretically should cost the same, but in practice, significant basis spreads persist.
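For reference, here is how the covered interest parity comparison works mechanically, with made-up numbers; a negative basis means synthetic dollars cost more than direct borrowing, the dollar-funding stress signal discussed above:

```python
# Covered interest parity (CIP): borrowing USD directly should cost the same as
# borrowing EUR, converting at spot, and hedging the repayment with a forward.
# All numbers are illustrative, not market data.
spot = 1.10          # USD per EUR, spot
forward = 1.1240     # USD per EUR, 1-year forward
r_eur = 0.03         # 1-year EUR borrowing rate
r_usd_direct = 0.05  # 1-year direct USD borrowing rate

# Synthetic USD rate implied by the FX swap.
r_usd_synthetic = (1 + r_eur) * forward / spot - 1

basis_bp = (r_usd_direct - r_usd_synthetic) * 1e4
print(f"synthetic USD rate: {r_usd_synthetic:.2%}, CIP basis: {basis_bp:+.0f} bp")
# -> synthetic USD rate: 5.25%, CIP basis: -25 bp (synthetic dollars trade rich)
```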

Continue reading Pozsar’s Bretton Woods III: Three Years Later [2/2]

Everything Is a DCF Model

A brilliant piece of writing from Michael Mauboussin and Dan Callahan at Morgan Stanley that was formative for my personal views on valuation.

[…] we want to suggest the mantra “everything is a DCF model.” The point is that whenever investors value a stake in a cash-generating asset, they should recognize that they are using a discounted cash flow (DCF) model. […] The value of those businesses is the present value of the cash they can distribute to their owners. This suggests a mindset that is very different from that of a speculator, who buys a stock in anticipation that it will go up without reference to its value. Investors and speculators have always coexisted in markets, and the behavior of many market participants is a blend of the two.
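The mantra fits in a few lines; a minimal sketch with illustrative cash flows and an assumed discount rate:

```python
# "Everything is a DCF model": value = present value of distributable cash flows.
def present_value(cash_flows, r):
    """Discount yearly cash flows received at the end of years 1, 2, ..."""
    return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, start=1))

flows = [100, 110, 121, 133, 146]           # illustrative growing cash flows
print(round(present_value(flows, r=0.08)))  # -> 480
```

Whatever you plug in for the cash flows and the rate, buying a stake in a cash-generating asset implies some version of this calculation, whether the buyer admits it or not.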

Original paper linked in this post’s title.

LLM Helped Discover a New Cancer Therapy Pathway

Google gets a lot of scrutiny for some of their work in other domains; nevertheless, it’s fair to appreciate that they continue to put major resources behind using AI to accelerate therapeutic discovery. The model and resources are open access and available to the research community.

How C2S-Scale 27B works: A major challenge in cancer immunotherapy is that many tumors are “cold” — invisible to the body’s immune system. A key strategy to make them “hot” is to force them to display immune-triggering signals through a process called antigen presentation. We gave our new C2S-Scale 27B model a task: Find a drug that acts as a conditional amplifier, one that would boost the immune signal only in a specific “immune-context-positive” environment where low levels of interferon (a key immune-signaling protein) were already present, but inadequate to induce antigen presentation on their own.

From their press release:

C2S-Scale generated a novel hypothesis about cancer cellular behavior and we have since confirmed its prediction with experimental validation in living cells. This discovery reveals a promising new pathway for developing therapies to fight cancer.

For a 27B model, that’s really, really neat! And on a more general note, scaling seems to deliver:

This work raised a critical question: Does a larger model just get better at existing tasks, or can it acquire entirely new capabilities? The true promise of scaling lies in the creation of new ideas, and the discovery of the unknown.

On a more critical note, it would be interesting to see whether this model can perform any better than existing simple linear models for predicting gene expression interactions.

Original bioRxiv paper linked in this post’s title.

The State of AI Report 2025

This year’s rendition of The State of AI Report is making the rounds on LinkedIn (yes, LinkedIn, the place where the great E = MC2 + AI equation was “discovered”).

Worth keeping in mind that this is made by Nathan Benaich, the founder of Air Street Capital, a venture capital firm investing in “AI-first companies”, so it obviously comes with a lot of bias. It’s also a relatively small, open survey, with 1'200 “AI practitioners” surveyed. An example of the bias:

shows that 95% of professionals now use AI at work or home

It’s obvious that 95% of professionals don’t use AI at work or home, and these results are heavily skewed. Nevertheless, the slide deck has a nice comprehensive review of research headlines over the year:

  • OpenAI, Google, Anthropic, and DeepSeek all releasing reasoning models capable of planning, verification, and self-correction.
  • China’s AI systems closed the gap to establish the country as a credible #2 in global AI capability.
  • 44% of U.S. businesses now paying for AI tools (up from just 5% in 2023), average contracts reaching $530'000, and 95% of surveyed professionals using AI regularly—while the capability-to-price ratio doubles every 6-8 months.
  • Multi-gigawatt data centers backed by sovereign funds compete globally, with power supply and land becoming as critical as GPUs.

    (13/10/2025) Update: I was just reminded that, purely in terms of sampling error, a sample of 1'200 is more than sufficient, even for a national-level poll. The main concern, however, remains potential selection bias, likely stemming from the fact that participation is driven by people who want to take the survey. It’s unclear how much this bias affects the results, but on sample size alone, the survey is adequate.
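For reference, the worst-case 95% margin of error for a proportion estimated from a simple random sample of 1'200:

```python
# Worst-case (p = 0.5) 95% margin of error for a simple random sample of n = 1'200.
# This addresses sampling error only; it says nothing about selection bias.
import math

n, p, z = 1200, 0.5, 1.96
moe = z * math.sqrt(p * (1 - p) / n)
print(f"±{moe:.1%}")   # -> ±2.8%
```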

Popular Science Nobel Prize

Mary E. Brunkow just won the 2025 Nobel Prize in Physiology or Medicine, awarded jointly, for discoveries concerning peripheral immune tolerance.

Brunkow, meanwhile, got the news of her prize from an AP photographer who came to her Seattle home in the early hours of the morning. She said she had ignored the earlier call from the Nobel Committee. “My phone rang and I saw a number from Sweden and thought: ‘That’s just, that’s spam of some sort.’”

The reason this is worth sharing (besides the fantastic work itself) is that the Nobel Prize always comes with a “Popular Science” publication offering accessible layman descriptions of what was discovered and why it matters. I’d call it an in-depth look at the discoveries at roughly a university level. Worth a read!