It just ain’t so

It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.

This (not actually) Mark Twain quote from The Big Short captures the sentiment of realizing that some foundational assumptions might be empirically wrong.

A recent article by Anton Vorobets that I came across in Justina Lee’s Quant Newsletter presents compelling evidence that challenges one of the field’s fundamental statistical assumptions, that asset returns follow normal distributions. Using 26 years of data from 10 US equity indices, he ran formal normality tests (Shapiro-Wilk, D’Agostino’s K², Anderson-Darling) and found that the normal distribution hypothesis gets rejected in most cases. The supposed “Aggregational Gaussianity” that academics invoke through Central Limit Theorem arguments? It’s mostly wishful thinking enabled by small sample sizes. As Vorobets observes:

Finance and economics academia is unfortunately driven by several convenient myths, i.e., claims that are taken for granted and spread among university academics despite their poor empirical support.

The article highlights significant practical consequences for portfolio management and risk assessment. Portfolio optimization based on normal distribution assumptions ignores fat left tails—exactly the kind of extreme downside events that can wipe out portfolios. This misspecification can lead to inadequate risk management and suboptimal asset allocation decisions. Vorobets suggests alternative approaches, including Monte Carlo simulations combined with Conditional Value-at-Risk (CVaR) optimization, which better accommodate the complex distributional properties observed in financial data. While computationally more demanding, these methods offer improved alignment with empirical reality.

Reading this piece gave me a few ideas for extensions I might want to explore in an upcoming personal project: (1) While Vorobets focuses on US equity indices, similar analysis across fixed income, commodities, currencies, and alternative assets would provide a more comprehensive view of distributional properties across financial markets. Each asset class exhibits distinct market microstructure characteristics that may influence distributional behavior. (2) Global Market Coverage: Extending the geographic scope to include developed, emerging, and frontier markets would illuminate whether the documented deviations from normality represent universal phenomena or are specific to US market structures. Cross-regional analysis could reveal important insights about market development, regulatory frameworks, and institutional differences. (3) Building upon Vorobets’ foundation, there are opportunities to incorporate multivariate normality testing, regime-dependent analysis, and time-varying parameter models. Additionally, investigating the power and robustness of different statistical tests across various market conditions would strengthen the methodological contribution. (4) Examining different time horizons, market regimes (pre- and post-financial crisis, COVID period), and potentially higher-frequency data could provide deeper insights into when and why distributional assumptions break down.

Not All AI Skeptics Think Alike

Apple’s recent paper “The Illusion of Thinking” has been widely understood to demonstrate that reasoning models don’t ‘actually’ reason. Using controllable puzzle environments instead of contaminated math benchmarks, they discovered something fascinating: there are three distinct performance regimes when it comes to AI reasoning complexity. For simple problems, standard models actually outperform reasoning models while being more token-efficient. At medium complexity, reasoning models show their advantage. But at high complexity? Both collapse completely. Here’s the kicker: reasoning models exhibit counterintuitive scaling behavior—their thinking effort increases with problem complexity up to a point, then declines despite having adequate token budget. It’s like watching a student give up mid-exam when the questions get too hard, even though they have plenty of time left.

We observe that reasoning models initially increase their thinking tokens proportionally with problem complexity. However, upon approaching a critical threshold—which closely corresponds to their accuracy collapse point—models counterintuitively begin to reduce their reasoning effort despite increasing problem difficulty.

The researchers found something even more surprising: even when they provided explicit algorithms—essentially giving the models the answers—performance didn’t improve. The collapse happened at roughly the same complexity threshold. On the other hand Sean Goedecke is not buying Apple’s methodology: His core objection? Puzzles “require computer-like algorithm-following more than they require the kind of reasoning you need to solve math problems.”

You can’t compare eight-disk to ten-disk Tower of Hanoi, because you’re comparing “can the model work through the algorithm” to “can the model invent a solution that avoids having to work through the algorithm”.

From his own testing, models “decide early on that hundreds of algorithmic steps are too many to even attempt, so they refuse to even start.“That’s strategic behavior, not reasoning failure. This matters because it shows how evaluation methodology shapes our understanding of AI capabilities. Goedecke argues Tower of Hanoi puzzles aren’t useful for determining reasoning ability, and that the complexity threshold of reasoning models may not be fixed.

Your AI Assistant Might Rat You Out

There was this story going around the past few days

Anthropic researchers find if Claude Opus 4 thinks you’re doing something immoral, it might “contact the press, contact regulators, try to lock you out of the system”

Mostly driven by a Sam Bowman tweet referring to the Claude 4 System Card section 4.1.9 on high-agency behavior. The outrage was mostly by people misunderstanding the prerequisites necessary for such a scenario. Nevertheless, an interesting question emerged: What happens when you feed an AI model evidence of fraud and give it an email tool? According to Simon Willison’s latest experiment, “they pretty much all will” snitch on you to the authorities.

A fun new benchmark just dropped! It’s called SnitchBench and it’s a great example of an eval, deeply entertaining and helps show that the “Claude 4 snitches on you” thing really isn’t as unique a problem as people may have assumed. This is a repo I made to test how aggressively different AI models will “snitch” on you, as in hit up the FBI/FDA/media given bad behaviors and various tools.

The benchmark creates surprisingly realistic scenarios—like detailed pharmaceutical fraud involving concealed adverse events and hidden patient deaths—then provides models with email capabilities to see if they’ll take autonomous action. This reveals something fascinating about AI behavior that goes beyond traditional benchmarks. Rather than testing reasoning or knowledge, SnitchBench probes the boundaries between helpful assistance and autonomous moral decision-making. When models encounter what appears to be serious wrongdoing, do they become digital whistleblowers? The implications are both reassuring and unsettling. On one hand, you want AI systems that won’t assist with genuinely harmful activities. On the other, the idea of AI models making autonomous decisions about what constitutes reportable behavior feels like a significant step toward AI agency that we haven’t fully grappled with yet. Therefore Anthropic’s own advice here seems like a good rule to follow:

Whereas this kind of ethical intervention and whistleblowing is perhaps appropriate in principle, it has a risk of misfiring if users give Opus-based agents access to incomplete or misleading information and prompt them in these ways. We recommend that users exercise caution with instructions like these that invite high-agency behavior in contexts that could appear ethically questionable.

Modeling Glycemic Response with XGBoost

Earlier this year I wrote how I built a CGM data reader afer wearing a continuous glucose monitor myself. Since I was already logging my macronutrients a learning more about molecular biology in an MIT MOOC I became curious if given a meal’s macronutrients (carbs, protein, fat) and some basic individual characteristics (age, BMI) could serve as features in a regressor machine learning model to predict the curve parameters of the postprandial glucose curve (how my blood sugar levels change after eating). I came across Personalized Nutrition by Prediction of Glycemic Responses which did exactly that. Yet neither the data nor the code was publicly available. And - I wanted to predict my own glycemic response curve. So I decided to build my own model. In the process I wrote this working paper. Overview of Working Paper Pages The paper represents an exercise in applying machine learning techniques to medical applications. The methodologies employed were largely inspired by Zeevi et al.’s approach. I quickly realized that training a model on my own data only was not very promising if not impossible. To tackle this, I used the publicly available Hall dataset containing continuous glucose monitoring data from 57 adults, which I narrowed down to 112 standardized meals from 19 non-diabetic subjects with their respective glucose curve after the meal (full methodology in the paper). Overview of the CGM pipeline workflow Rather than trying to predict the entire glucose curve, I simplified the problem by fitting each postprandial response to a normalized Gaussian function. This gave me three key parameters to predict: amplitude (how high glucose rises), time-to-peak (when it peaks), and curve width (how long the response lasts). Overview of single fitted curve of cgm measurements The Gaussian approximation worked surprisingly well for characterizing most glucose responses. While some curves fit better than others the majority of postprandial responses were well-captured, though there’s clear variation between individuals and meals. Some responses are high amplitude, narrow width, while others are more gradual and prolonged. Overview of selected fitted curves I then trained an XGBoost regressor with 27 engineered features including meal composition, participant characteristics, and interaction terms. XGBoost was chosen for its ability to handle mixed data types, built-in feature importance, and strong performance on tabular data. The pipeline included hyperparameter tuning with 5-fold cross-validation to optimize learning rate, tree depth, and regularization parameters. Rather than relying solely on basic meal macronutrients, I engineered features across multiple categories and implemented CGM statistical features calculated over different time windows (24-hour and 4-hour periods), including time-in-range and glucose variability metrics. Architecture wise I trained three separate XGBoost regressors - one for each Gaussian parameter.

While the model achieved moderate success predicting amplitude (R² = 0.46), it completely failed at predicting timing - time-to-peak prediction was essentially random (R² = -0.76), and curve width prediction was barely better (R² = 0.10). Even the amplitude prediction, while statistically significant, falls well short of R² > 0.7. Studies that have achieved better predictive performance typically used much larger datasets (>1000 participants). For my original goal of predicting my own glycemic responses, this suggests that either individual-specific models trained on extensive personal data, or much more sophisticated approaches incorporating larger training data sets, would be necessary.

The complete code, Jupyter notebooks, processed datasets, and supplementary results are available in my GitHub repository.
_ _

(10/06/205) Update: Today I came across Marcel Salathé’s LinkedIn post on a publication out of EPFL: Personalized glucose prediction using in situ data only.

With data from over 1,000 participants of the Food & You digital cohort, we show that a machine learning model using only food data from myFoodRepo and a glucose monitor can closely track real blood sugar responses to any meal (correlation of 0.71).

As expected Singh et. al. achieve a substantially better predictive performance (R = 0.71 vs R² = 0.46). Besides probably higher methodological rigor and scientific quality, the most critical difference is sample size - their 1'000+ participants versus my 19 participants (from the Hall dataset) represents a fundamental difference in statistical power and generalizability. They adressed one of the shortcomings I faced by leveraging a large digital nutritional cohort from the “Food & You” study (including high-resolution data of nutritional intake of more than 46 million kcal collected from 315'126 dishes over 23'335 participant days, 1'470'030 blood glucose measurements, 49'110 survey responses, and 1'024 samples for gut microbiota analysis).

Apart from that I am excited to - at a first glance - observe the following similarities: (1) Both aim to predict postprandial glycemic responses using machine learning, with a focus on personalized nutrition applications. (2) Both employ XGBoost regression as their primary predictive algorithm and use similar performance metrics (R², RMSE, MAE, Pearson correlation). (3) Both extract comprehensive feature sets including meal composition (macronutrients), temporal features, and individual characteristics. (4) Both use mathematical approaches to characterize glucose responses - I used Gaussian curve fitting, while Singh et. al. use incremental area under the curve (iAUC). (5) Both employ cross-validation techniques for model evaluation and hyperparameter tuning. (6) SHAP Analysis: Both use SHAP for model interpretability and feature importance analysis.

Gambling vs. Investing

Kalshi, a prediction market startup, is using its federal financial license to offer sports betting nationwide, even in states where it’s not legal. The move has earned them cease-and-desist letters from state gaming regulators, but CEO Tarek Mansour isn’t backing down:

We can go one by one for every financial market and it would fall under the definition of gambling. So what’s the difference?

It’s a question that cuts to the heart of modern finance. The founders argue that Wall Street blurred the line between investing and gambling long ago, and casting Kalshi as the latter is inconsistent at best. They have a point—if you can bet on oil futures, Nvidia’s stock price, or interest rate movements, why is wagering on NFL touchdowns more objectionable? Benefiting from the Trump administration’s hands-off regulatory approach, with the CFTC dropping its legal challenge to their election contracts the odds might be in their favour. Even better, a Kalshi board member is awaiting confirmation to lead the very agency that was previously their biggest antagonist. The technical distinction matters: Kalshi operates as an exchange between traders rather than a house taking bets against customers. But functionally, with 79% of their recent trading volume being sports-related, they’re forcing us to confront an uncomfortable reality about risk, speculation, and what we choose to call “investing.” Whether you call it innovation or regulatory arbitrage, Kalshi is exposing the arbitrary nature of the lines we’ve drawn around acceptable financial speculation.
_ _

(17/06/205) Update: Matt Levine - one of the Finance columnist I enjoy reading most - just published a long piece “It’s Not Gambling, It’s Predicting” in his newsletter on exactly this issue:

Kalshi offers a prediction market where you can bet on sports. No! Sorry! Wrong! It offers a prediction market where you can predict which team will win a sports game, and if you predict correctly you make money, and if you predict incorrectly you lose money. Not “bet on sports.” “Predict sports outcomes for money.” Completely different.

The Model Said So

LLMs make your life easier until they don’t.

Their intrinsic complexity and lack of transparency pose significant challenges, especially in the highly regulated financial sector

Unlike other industries where “the model said so” might suffice, finance demands audit trails, bias detection, and explainable decision-making—requirements that sit uncomfortably with neural networks containing billions of parameters. The research highlights a fundamental tension that’s about to reshape fintech: the same complexity that makes LLMs powerful at parsing market sentiment or generating investment reports also makes them regulatory nightmares in a sector where you need to explain every decision to examiners.

Dual Mandate Tensions

Something interesting just happened at the National Bureau of Economic Research NBER

We study the optimal monetary policy response to the imposition of tariffs in a model with imported intermediate inputs. In a simple open-economy framework, we show that a tariff maps exactly into a cost-push shock in the standard closed-economy New Keynesian model, shifting the Phillips curve upward. We then characterize optimal monetary policy, showing that it partially accommodates the shock to smooth the transition to a more distorted long-run equilibrium—at the cost of higher short-run inflation.

Here’s where it gets interesting for current policy: Werning et. al. show that “optimal” monetary policy would actually calls for partial accommodation of tariff shocks—essentially allowing some inflation to persist to smooth the transition to what they euphemistically call “a more distorted long-run equilibrium.” With core PCE still running above the Fed’s 2% target and renewed tariff threats on the horizon, this research suggests Powell may need to abandon his recent dovish pivot and prepare for rate hikes that prioritize price stability over employment concerns. The dual mandate was never meant to be dual when the two mandates point in opposite directions.

Beyond Monte Carlo: Tensor-Based Market Modeling

A fascinating new paper from Stefano Iabichino at UBS Investment Bank explores what happens when you take the attention mechanisms powering modern AI and apply them to Wall Street’s most fundamental pricing problems, tackling what might be quantitative finance’s most intractable challenge.

The problem is elegantly simple yet profound: machine learning models are great at finding patterns in historical data, but financial theory demands that arbitrage-free prices be independent of past information. As the authors put it:

We contend that a fundamental tension exists between the usage of ML methodologies in risk and pricing and the First Fundamental Theorem of Finance (FFTF). While ML models rely on historical data to identify recurring patterns, the FFTF posits that arbitrage-free market prices are independent of past information.

Their solution? Transition Probability Tensors (TPTs) that function like attention mechanisms in neural networks, dynamically weighting relationships between risk factors while maintaining mathematical rigor. Instead of learning from history, these tensors capture “dynamic, context-aware relationships across dimensions” in real-time.

The practical results are impressive: simulating 210 quantitative investment strategies across 100,000 market scenarios in just 70 seconds, while identifying optimal hedging strategies and stress-testing future market conditions. The framework even adapts to different volatility regimes, shifting focus toward tail events during high-volatility periods—exactly like attention mechanisms focusing on relevant context. Whether it scales beyond this impressive proof-of-concept remains to be seen, but it’s seems to be a genuine attempt to resolve the fundamental tension between AI’s pattern-seeking nature and finance’s requirement for arbitrage-free pricing.

DeFi's $42 Billion Maturity Story

A new academic review by Ali Farhani reveals that institutional Total Value Locked in DeFi protocols hit $42 billion in 2024, with BlackRock leading the charge by launching a $250 million tokenized fund on Centrifuge.

The numbers tell a remarkable story of maturation. Layer 2 solutions like Optimism and Arbitrum now dominate the scaling landscape, while zero-knowledge proofs have reduced compliance costs by 30%. Even the terminology is evolving—researchers now discuss “Total Value Redeemable” instead of the traditional TVL metric, acknowledging that not all locked value is immediately liquid. Despite technological advances, security incidents persist with painful regularity: $350 million lost in the Wormhole bridge exploit, $81 million in Orbit Chain’s multi-signature failure. Cross-chain bridges remain “high-risk attack targets,” a sobering reminder that connecting different blockchains is still more art than science. The regulatory landscape is complicated as well. Europe’s MiCA regulation provides clear frameworks, while the SEC maintains its enforcement-first approach. Hong Kong’s innovation sandbox offers a third path, balancing experimentation with oversight.

DeFi is transitioning from a disruptive experiment to an integrated component of the global financial system

That transition isn’t complete—Layer 2 solutions are projected to host over 70% of DeFi TVL by mid-2025—but the direction is clear.

Trading on Market Sentiment

This post is based in part on a 2022 presentation I gave for the ICBS Student Investment Fund and my seminar work at Imperial College London.

As we were looking for new investment strategies for our Macro Sentiment Trading team OpenAI had just published their GPT-3.5 Model. After first experiments with the Model we asked ourselves: How would large language models like GPT-3.5 perform in predicting sentiment in financial markets, where the signal-to-noise ratio is notoriously low. And could they potentially even outperform industry benchmarks at interpreting market sentiment from news headlines? The idea wasn’t entirely new. Studies [2] [3] have shown that investor sentiment, extracted from news and social media, can forecast market movements. But most approaches rely on traditional NLP models or proprietary systems like RavenPack. With the recent advances in large language models, I wanted to test whether these more sophisticated models could provide a competitive edge in sentiment-based trading. Before looking at model selection, it’s worth understanding what makes trading on sentiment so challenging. News headlines present two fundamental problems that any robust system must address. Relative frequency of monthly Google News Search terms over 5 years. Numbers represent search interest relative to highest point. A value of 100 is the peak popularity for the term. First, headlines are inherently non-stationary. Unlike other data sources, news reflects the constantly shifting landscape of global events, political climates, economic trends etc. A model trained on COVID-19 vaccine headlines from 2020 might struggle with geopolitical tensions in 2023. This temporal drift means algorithms must be adaptive to maintain relevance. Impact of headlines measured by subsequent index move (Data Source: Bloomberg) Second, the relationship between headlines and market impact is far from obvious. Consider these actual headlines from November 2020: “Pfizer Vaccine Prevents 90% of COVID Infections” drove the S&P 500 up 1.85%, while “Pfizer Says Safety Milestone Achieved” barely moved the market at -0.05%. The same company, similar positive news, dramatically different market reactions.

When developing a sentiment-based trading system, you essentially have two conceptual approaches: forward-looking and backward-looking. Forward-looking models try to predict which news themes will drive markets, often working qualitatively by creating logical frameworks that capture market expectations. This approach is highly adaptable but requires deep domain knowledge and is time-consuming to maintain. Backward-looking models analyze historical data to understand which headlines have moved markets in the past, then look for similarities in current news. This approach can leverage large datasets and scale efficiently, but suffers from low signal-to-noise ratios and the challenge that past relationships may not hold in the future. For this project, I chose the backward-looking approach, primarily for its scalability and ability to work with existing datasets.

Rather than rely on traditional approaches like FinBERT (which only provides discrete positive/neutral/negative classifications), I decided to test OpenAI’s GPT-3.5 Turbo model. The key advantage was its ability to provide continuous sentiment scores from -1 to 1, giving much more nuanced signals for trading decisions. I used news headlines from the Dow Jones Newswire covering the 30 DJI companies from 2018-2022, filtering for quality sources like Wall Street Journal and Bloomberg. After removing duplicates, this yielded 2,072 headlines. I then prompted GPT-3.5 to score sentiment with the instruction: Rate the sentiment of the following news headlines from -1 (very bad) to 1 (very good), with two decimal precision. To validate the approach, I compared GPT-3.5 scores against RavenPack—the industry’s leading commercial sentiment provider. Sample entries of the combined data set. The correlation was 0.59, indicating the models generally agreed on sentiment direction while providing different granularities of scoring. More interesting was to compare the distribution of the sentiment ratings between the two models. This could have been approximated closer through some fine tuning of the (minimal) prompt used earlier. Comparing the distribution of the sentiment scores generated using the GPT-3.5 model with the benchmark scores from RavenPack. I implemented a simple strategy: go long when sentiment hits the top 5% of scores, close positions at 25% profit (to reduce transaction costs), and maintain a fully invested portfolio with 1% commission per trade. The results were mixed but promising. Over the full 2018-2022 period, the GPT-3.5 strategy generated 41.02% returns compared to RavenPack’s 40.99%—essentially matching the industry benchmark. However, both underperformed a simple buy-and-hold approach (58.13%) during this generally bullish period. Relaying on market sentiment wehen news flow is low can be a tricky strategy. As it can be seen from the example of the Salesforce stock performance the strategy remained uninvested over a large period of time due to a (sometimes long) past negative sentiment signal. Stock performance of Salesforce (CRM) for 5 years from 2018 with sentiment indicators overlayed. When I tested different timeframes, the sentiment strategy showed its strength during volatile periods. From 2020-2022, it outperformed buy-and-hold (22.83% vs 21.00%). As expected, sentiment-based approaches work better when markets are less directional and more driven by news flow. To evaluate if the scores generated by our GPT prompt were more accurate than the ones from the RavenPack Benchmark, I calculated returns for different holding windows. The scores generated by our GPT prompt perform significantly better in the short term (1 and 10 days) for positive sentiment and for the long term (90 days) for negative sentiment. Average 1, 10, 30, and 90-day holding period return for both models. (Note: For lower sentiment, negative returns are desirable since the stock would be shorted)

While the model performed well technically, this project highlighted several practical challenges. First, data accessibility remains a major hurdle—getting real-time, high-quality news feeds is expensive and often restricted. Second, the strategy worked better in a more volatile environment which hence prompts many individual trades, creating substantial transaction costs that significantly impact returns. Perhaps most importantly, any real-world implementation would need to compete with high-frequency traders who can act on news within milliseconds. The few seconds required for GPT-3.5 to process headlines and generate sentiment scores are nowhere from being competitive. Despite these challenges, the project demonstrated that LLMs can match industry benchmarks for sentiment analysis—and this was using a general-purpose model, not one specifically fine-tuned for financial applications. OpenAI (and others) today offer more powerful models st very low cost as well as fine-tuning capabilities that could further improve performance. The bigger opportunity might be in combining sentiment signals with other factors, using sentiment as one input in a more sophisticated trading system rather than the sole decision criterion. There’s also potential in expanding beyond simple long-only strategies to include short positions on negative sentiment, or developing “sentiment indices” that smooth out individual headline noise. Market sentiment strategies may not be optimal for long-term investing, but they show clear promise for shorter-term trading in volatile environments. As LLMs continue to improve and become more accessible, this might offer an opportunity to revisit this project.