Listen on Apple, Spotify, or Google Podcasts.

Market Update📈📉

Quick Hits:

* Asset Class Reversal: We are witnessing a historical anomaly. Gold (+54%) is currently the best-performing major asset of 2025, while Bitcoin (-1%) sits as the worst. This is the direct inverse of 2013 and a dynamic we haven’t seen before in a calendar year.

* S&P 500 Technicals: The S&P 500 closed below its 50-day moving average for the first time since April 30, officially ending the 5th-longest uptrend since 1950 (a short sketch of the moving-average calculation follows this list).

* Institutional Signal: Despite Bitcoin’s price lag, institutional adoption is heating up. Harvard’s endowment reported in its Q3 13F filing that the iShares Bitcoin ETF (IBIT) is now both its largest position and its biggest increase—a significant stamp of approval from the endowment world.
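
For anyone curious about the mechanics behind that moving-average signal, here is a minimal sketch of a 50-day simple moving average check in Python. The price series below is a hypothetical placeholder, not actual S&P 500 data.

```python
# Minimal sketch: compute a 50-day simple moving average (SMA)
# and check whether the latest close sits below it.
# The closes list is illustrative, not real index data.

def sma(closes, window=50):
    """Simple moving average of the last `window` closes."""
    if len(closes) < window:
        raise ValueError("not enough data for the window")
    return sum(closes[-window:]) / window

closes = [5800 + 2 * i for i in range(60)]  # hypothetical steady uptrend
closes.append(5750)                          # hypothetical weak final close

ma50 = sma(closes)
print(f"50-day MA: {ma50:.1f}, last close: {closes[-1]}")
print("Below the 50-day MA" if closes[-1] < ma50 else "Above the 50-day MA")
```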

The AI “Non-Bubble” and Gemini 3

The narrative regarding Artificial Intelligence has shifted fundamentally over the last few weeks. We are moving from a phase of “inevitable euphoria” to a phase of “verification.”

1. The “Non-Bubble” Disappointment

Ironically, both AI bulls and bears are disappointed. Bulls wanted a parabolic, “melt-up” bubble (think 1997-1999) to maximize short-term gains. Bears wanted a bubble so it would burst. Instead, we are in a “non-bubble”: valuations are reasonable (NVDA ~20x), margins are rich, and we are early in the supercycle.

2. The Catalyst: “Sam’s Splurge” (SS)

The turning point was Sam Altman’s $1.4T infrastructure plan. Instead of fueling excitement, this massive capital requirement opened “Pandora’s Box,” shifting investor sentiment from blind optimism to scrutiny.

* Credit Risk: The sheer scale of the plan (nearly the size of the private credit market) forced lenders to reprice AI-linked risk. We saw this immediately in widening CDS spreads for Oracle and CoreWeave.

* Government & Feasibility: The plan invited government scrutiny regarding energy grids, water usage, and land rights. It dragged long-term risks (post-2028) into the present day.

* Too Big to Fail: The market realized that OpenAI is no longer just a startup; it is a systemic risk. If it fails to execute on $1.4T, it drags the ecosystem down with it.

3. The Market Reaction: BSS vs. ASS

We are moving from BSS (Before Sam’s Splurge) to ASS (After Sam’s Splurge).

* Profitability over Narrative: As uncertainty rises, the market is favoring profitability. Companies with tangible earnings (Memory/DRAM) are outperforming, while pure narrative stocks (Nuclear, Quantum) are rolling over.

* The “Giddy” Phase is Over: The straight-line ascent is likely done. We are entering a healthier, more mature phase where stock-picking, fundamentals, and idiosyncrasies matter more than sector-wide hype.

Bottom Line: The AI trade isn’t broken; it is simply growing up.

This view is supported by the release of Gemini 3:

The long-awaited Gemini 3 finally launched yesterday, and the entire industry seemed to be holding its breath for it. Based on the benchmarks released so far, the model largely meets the high expectations that had built up beforehand. It sets new records across multiple mainstream leaderboards, especially in long-horizon reasoning, native multimodal alignment, and cross-modality inference. On many benchmarks the performance gap over competitors is not small, rekindling optimism that large models may genuinely break through on long-chain task complexity and real-world application depth. These capabilities are precisely where the next stage of AI deployment will happen—far beyond simple chat or text generation.

What’s interesting is that Gemini 3’s improvement doesn’t come from fancy RL tricks or alignment methods, but almost entirely from stronger pre-training. Multiple sources, including Google employees, confirmed this point: this round of progress is, quite literally, “built on brute-force compute.”

Podcast & YouTube Recommendations🎙

* Plain English with a fun episode on AI and Work:

* The BG2 Podcast, mentioned in the podcast:

Best Links of The Week🔮

* Warren Buffett’s final letter to shareholders. Enjoy retirement, GOAT - Berkshire

* Felix Stocker has a nice essay on mining and society. Which sounds like the topic of the one humanities class a geological engineering major would grudgingly sit through, but which is actually a pretty pivotal question: many modern conveniences—especially the batteries, windmills, electric motors, and solar panels we use to reduce our reliance on emissions-heavy sources—require inputs that necessarily have to be dug up out of the ground, often in an environmentally destructive way. When there’s a debate over a mine, it’s not so much big business versus the environment as it is environmentalism versus climate change and energy security.

* Mark Humphries in Generative History has a fascinating piece on Gemini decoding centuries-old handwritten records in a very human-like way, by using context clues in the document to infer missing information. In a sense, the LLM’s transcription was more than 100% correct, because it identified and fixed an ambiguity in the historical record (even expert human readers will occasionally miss something like this). One of the unique axes on which models perform well is that they don’t get bored the way a person would, and are willing to check their work to make sure it’s logically consistent even when the task is just to transcribe text.

* And on a similar note, this Dwarkesh Patel and Dylan Patel interview with Satya Nadella has an interesting side note on legibility: Nadella notes that AI makes it easier to move information from an Excel file into a real database, and that means it’s easier to join across different datasets. Cheap determinism is a complement to more flexible but uncertain LLMs. Future historians will have a much easier time trawling through historical data, at least as long as someone pays to store it.



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit reformedmillennials.substack.com