There’s a company that spent almost $50,000 because an agent went into an infinite loop and no one noticed for a month.
It threw no errors, and apparently no one was monitoring the costs. It’s nice that people write about incidents like that in the database as well. Afterward, the lessons were clear: watch out for infinite loops. Watch out for cascading tool failures. And watch out for silent failures, where the agent reports success when it didn’t!
We Discuss:
* Why the most successful teams are ripping out and rebuilding their agent systems every few weeks as models improve, and why over-engineering now creates technical debt you can’t afford later;
* The $50,000 infinite loop disaster and why “silent failures” are the biggest risk in production: agents confidently report success while spiraling into expensive mistakes;
* How ELIOS built emergency voice agents with sub-400ms response times by aggressively throwing away context every few seconds, and why these extreme patterns are becoming standard practice;
* Why DoorDash uses a three-tier agent architecture (manager, progress tracker, and specialists) with a persistent workspace that lets agents collaborate across hours or days;
* Why simple text files and markdown are emerging as the best “continual learning” layer: human-readable memory that persists across sessions without fine-tuning models;
* The 100-to-1 problem: for every useful output, tool-calling agents generate 100 tokens of noise, and the three tactics (reduce, offload, isolate) teams use to manage it;
* Why companies are choosing Gemini Flash for document processing and Opus for long reasoning chains, and how to match models to your actual usage patterns;
* The debate over vector databases versus simple grep and cat, and why giving agents standard command-line tools often beats complex APIs;
* What “re-architect” as a job title reveals about the shift from 70% scaffolding / 30% model to 90% model / 10% scaffolding, and why knowing when to rip things out may be the most important skill today.
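The “continual learning” pattern above is simple enough to sketch: the agent appends lessons to a plain markdown file, and later sessions read it back into the prompt. A minimal illustration (the function names and file path are hypothetical, not from the episode):

```python
from pathlib import Path

def remember(memory_file: Path, lesson: str) -> None:
    """Append a lesson as a markdown bullet; creates the file if needed."""
    with memory_file.open("a", encoding="utf-8") as f:
        f.write(f"- {lesson}\n")

def recall(memory_file: Path) -> str:
    """Return the accumulated notes, ready to prepend to an agent's prompt."""
    return memory_file.read_text(encoding="utf-8") if memory_file.exists() else ""

# Hypothetical usage: notes persist across sessions with no fine-tuning,
# and a human can open and edit the file directly.
notes = Path("agent_memory.md")
remember(notes, "Retry a failing tool at most 3 times to avoid cascading failures")
remember(notes, "Alert on cost spikes: a looping agent can burn money silently")
print(recall(notes))
```

The appeal is exactly what the episode highlights: the memory is human-readable, versionable, and survives across sessions without touching model weights.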
You can also find the full episode on Spotify, Apple Podcasts, and YouTube.
You can also interact directly with the transcript here in NotebookLM. If you do, let us know what you find in the comments!
👉 Want to learn more about Building AI-Powered Software? Check out our Building AI Applications course. It’s a live cohort with hands-on exercises and office hours. Our final cohort starts March 10, 2026. Here is a 25% discount code for readers. 👈
Show Notes Links
* Alex Strick van Linschoten on LinkedIn
* Alex Strick van Linschoten on Twitter/X
* LLMOps Database Dataset on Hugging Face
* Hugo’s MCP Server for LLMOps Database
* Alex’s Blog: What 1,200+ Production Deployments Reveal About LLMOps in 2025
* Previous Episode: Practical Lessons from 750 Real-World LLM Deployments
* Previous Episode: Tales from 400 LLM Deployments
* Context Rot Research by Chroma
* Hugo’s Post: AI Agent Harness - 3 Principles for Context Engineering
* Hugo’s Post: The Rise of Agentic Search
* Episode with Nick Moy: The Post-Coding Era
* Hugo’s Personal Podcast Prep Skill Gist
* Claude Tool Search Documentation
* Gastown on GitHub (Steve Yegge)
* Welcome to Gastown by Steve Yegge
* ZenML - Open Source MLOps & LLMOps Framework
* Vanishing Gradients on YouTube
* Watch the podcast livestream on YouTube
* Join the final cohort of our Building AI Applications course in March 2026 (25% off for listeners)