Ready for a reality check on AI security? We invited Cisco cybersecurity expert Katherine McNamara to dig into where large language models actually break: from prompt injection and over-permissioned plugins to reckless “vibe-coded” apps that leak IDs, photos, and entire backends. The stories are real, the stakes are high, and the fixes are concrete. We trace how AI sprawl mirrors the worst of early IoT—weak defaults, poor isolation, and a stampede to integrate models into billing, HR, and support without guardrails—only this time the blast radius includes your customer data and your legal exposure.

We talk through the human factor first. Written policies won’t stop someone from pasting a pen test report into a public chatbot. DLP helps, but hybrid work and BYOD stretch defenses thin. Then we move to the core threat model: public and private models are targets; datasets can be poisoned; plugins often ship with admin-level scopes; and a clever prompt can trick an LLM into disclosing chat histories, creating new accounts, or modifying orders. Courts have already treated chatbots as company representatives, binding businesses to their outputs—another reason to treat every integration like an untrusted user with strict least privilege.

It’s not all doom. Used well, AI gives security operations superpowers: correlating signals across dozens of tools, reducing alert fatigue, and surfacing lateral movement. The path forward is discipline, not denial. Fence models on the network. Prefer read-only to write. Gate plugins behind narrowly scoped APIs. Vet datasets for backdoors. Red-team prompts as seriously as you pen test code. And educate stakeholders with live demos so they see why these controls matter. We also unpack the shaky economics—GPU costs, rising consumer fatigue, hype-fueled projects with little ROI—and why that pressure can erode privacy if teams aren’t vigilant.
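For listeners who ship integrations, the "least privilege" guardrails above can be sketched in a few lines: deny model-requested actions by default, expose only read-only operations, and validate the model's arguments like any untrusted input. This is a minimal illustration, not a production framework; the names (`ALLOWED_ACTIONS`, `handle_tool_call`) are hypothetical and not from any specific LLM SDK.

```python
# Hypothetical sketch: gate LLM "plugin"/tool calls behind an explicit
# allowlist of narrowly scoped, read-only actions (least privilege).
# All names here are illustrative, not a real SDK.

ALLOWED_ACTIONS = {
    "get_order_status",   # read-only: look up one order
    "get_faq_answer",     # read-only: static knowledge base
}
# Deliberately absent: "modify_order", "create_account", "refund" --
# write actions the model should never trigger directly.

def handle_tool_call(action: str, args: dict) -> dict:
    """Dispatch a model-requested tool call, denying anything
    outside the read-only allowlist."""
    if action not in ALLOWED_ACTIONS:
        # Deny by default; a prompt-injected request for a write
        # action fails here instead of reaching the backend.
        return {"error": f"action '{action}' not permitted"}
    if action == "get_order_status":
        order_id = str(args.get("order_id", ""))
        # Treat model output as untrusted user input and validate it.
        if not order_id.isdigit():
            return {"error": "invalid order_id"}
        return {"order_id": order_id, "status": "shipped"}  # stub lookup
    return {"answer": "See our FAQ."}  # stub for get_faq_answer
```

The design choice is the one discussed in the episode: the model is treated as an untrusted user, so even a successful prompt injection can only request actions the gateway already permits.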

If you’re building with LLMs or trying to rein them in, this conversation gives you a practical map: what to allow, what to block, and how to make AI useful without turning your stack into an attack surface. Subscribe, share with a teammate who ships integrations, and drop a review with the one guardrail you’ll implement this quarter.


Connect with our Guest:
https://x.com/kmcnam1
https://www.linkedin.com/in/katherinermcnamara/

Purchase Chris and Tim's book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/

Check out the Monthly Cloud Networking News
https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/

Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj