Listen

Description

Episode 10 of Before the Commit dives into three main themes: the AI investment bubble, Claude Code’s AI-powered security review tool, and AI security vulnerabilities like RAG-based attacks — closing with speculation about OpenAI’s Sora 2 video generator and the future of generative media.

Danny and Dustin open by comparing today’s AI investment surge to the 2008 mortgage and 2000 dot-com bubbles. Venture capitalists, they note, over-allocated funds chasing quick returns, assuming AI would replace human labor rapidly. In reality, AI delivers productivity augmentation, not full automation.
They describe a likely market correction — as speculative investors pull out, valuations will drop before stabilizing around sustainable use cases like developer tools. This mirrors natural boom-and-bust cycles where “true believers” reinvest at the bottom.

Key factors driving a pullback:

The “Tool of the Week” spotlights Anthropic’s Claude Code Security Reviewer, a GitHub Action that performs AI-assisted code security analysis. It reviews pull requests for OWASP-style vulnerabilities, posting contextual comments.

Highlights:

The hosts emphasize that this exemplifies how AI augments, not replaces, security engineers — introducing new “sensors” for software integrity.

In the Kill’em Chain segment, they examine the MITRE ATLAS “Morris II” worm, a zero-click RAG-based attack that spreads through AI systems ingesting malicious email content.
By embedding hostile prompts into ingested data, attackers can manipulate LLMs to exfiltrate private information or replicate across retrieval-augmented systems.

They discuss defensive concepts like:

The hosts close with reflections on OpenAI’s Sora 2 video model, which has stunned users with lifelike outputs and raised copyright debates.
OpenAI reportedly allows copyrighted content unless creators opt out manually, sparking comparisons to the 1990s hip-hop sampling wars. They wonder whether AI firms are effectively “too big to fail,” given massive state-level investments and national-security implications.

Philosophical questions arise:

They end humorously — “With humanity, the answer to every question is yes” — previewing next week’s episode on Facebook’s LLMs, OpenAI’s “NAN killer”, and side-channel LLM data leaks.