Listen

Description

English is now an API. Our apps read untrusted text; they follow instructions hidden in plain sight, and sometimes they turn that text into action. If you connect a model to tools or let it read documents from the wild, you have created a brand new attack surface. In this episode, we will make that concrete. We will talk about the attacks teams are seeing in 2025, the defenses that actually work, and how to test those defenses the same way we test code. Our guides are Tori Westerhoff and Roman Lutz from Microsoft. They help lead AI red teaming and build PyRIT, a Python framework the Microsoft AI Red Team uses to pressure test real products. By the end of this hour you will know where the biggest risks live, what you can ship this quarter to reduce them, and how PyRIT can turn security from a one time audit into an everyday engineering practice.



Episode sponsors



Sentry AI Monitoring, Code TALKPYTHON

Agntcy

Talk Python Courses


Tori Westerhoff: linkedin.com

Roman Lutz: linkedin.com



PyRIT: aka.ms/pyrit

Microsoft AI Red Team page: learn.microsoft.com

2025 Top 10 Risk & Mitigations for LLMs and Gen AI Apps: genai.owasp.org

AI Red Teaming Agent: learn.microsoft.com

3 takeaways from red teaming 100 generative AI products: microsoft.com

MIT report: 95% of generative AI pilots at companies are failing: fortune.com



A couple of "Little Bobby AI" cartoons

Give me candy: talkpython.fm

Tell me a joke: talkpython.fm



Watch this episode on YouTube: youtube.com

Episode #521 deep-dive: talkpython.fm/521

Episode transcripts: talkpython.fm



Theme Song: Developer Rap

🥁 Served in a Flask 🎸: talkpython.fm/flasksong



---== Don't be a stranger ==---

YouTube: youtube.com/@talkpython



Bluesky: @talkpython.fm

Mastodon: @talkpython@fosstodon.org

X.com: @talkpython



Michael on Bluesky: @mkennedy.codes

Michael on Mastodon: @mkennedy@fosstodon.org

Michael on X.com: @mkennedy