Description

Agentic AI programming is what happens when coding assistants stop acting like autocomplete and start collaborating on real work. In this episode, we cut through the hype and incentives to define “agentic,” then get hands-on with how tools like Cursor, Claude Code, and LangChain actually behave inside an established codebase. Our guest, Matt Makai, now VP of Developer Relations at DigitalOcean and creator of Full Stack Python and Plushcap, shares hard-won tactics. We unpack what breaks, from brittle “generate a bunch of tests” requests to agents amplifying technical debt and uneven design patterns. We also discuss a sane git workflow for AI-sized diffs. You’ll hear practical Claude tips, why developers write more bugs when typing less, and where open source agents are headed. Hint: the destination is humans as editors of systems, not just typists of code.



Episode sponsors:



Posit

Talk Python Courses


Matt Makai: linkedin.com



Plushcap Developer Content Analytics: plushcap.com

DigitalOcean Gradient AI Platform: digitalocean.com

DigitalOcean YouTube Channel: youtube.com

Why Generative AI Coding Tools and Agents Do Not Work for Me: blog.miguelgrinberg.com

AI Changes Everything: lucumr.pocoo.org

Claude Code - 47 Pro Tips in 9 Minutes: youtube.com

Cursor AI Code Editor: cursor.com

JetBrains Junie: jetbrains.com

Claude Code by Anthropic: anthropic.com

Full Stack Python: fullstackpython.com



Watch this episode on YouTube: youtube.com

Episode #517 deep-dive: talkpython.fm/517

Episode transcripts: talkpython.fm



Theme Song: Developer Rap 🥁 Served in a Flask 🎸: talkpython.fm/flasksong



---== Don't be a stranger ==---

YouTube: youtube.com/@talkpython



Bluesky: @talkpython.fm

Mastodon: @talkpython@fosstodon.org

X.com: @talkpython



Michael on Bluesky: @mkennedy.codes

Michael on Mastodon: @mkennedy@fosstodon.org

Michael on X.com: @mkennedy