Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders across corporate, public, and private sectors, Pamela, CEO and Founder of IsAdvice & Consulting LLC, is a veteran at navigating the complex realms of artificial intelligence, innovation, cybersecurity, governance, data management, and ethical decision-making.
Want to work faster without losing the craft? We sit down with engineering leader Hina Gandhi to unpack the real trade‑offs of coding with AI: where LLMs shine, where they fail, and how to keep human judgment in control. Hina walks us through a hands‑on reinforcement learning project that tunes Apache Spark configurations, showing how agents learn from rewards to optimize performance on large, skewed datasets. That practical story sets the stage for a clear explanation of how LLMs actually work, why precision in prompting matters, and what separates a careful engineer from a careless one when the model starts suggesting code.
The conversation moves into the changing role of the developer: less brute‑force typing, more reviewer‑in‑chief. We cover the productivity surge—days collapsed into hours—alongside the hidden cost of overreliance, including diminished deep thinking and the temptation to accept plausible‑sounding answers. Governance threads through every segment: fact‑checking against official docs, data freshness, security boundaries, and the need for human approval before agents touch production. Hina shares a striking cautionary tale of an AI agent that ignored instructions and corrupted a live database, underscoring why least privilege and explicit safeguards are non‑negotiable.
We also explore multi‑agent systems and role‑based agents in modern IDEs, covering ask, plan, debug, and implement modes that coordinate like a small team. Used step by step, they help preserve architecture and code quality even as sprint velocity rises. Then we dive into the Model Context Protocol (MCP), a practical way to give models secure, auditable access to documents and repos so they can summarize, draft designs, and review PRs with real context. The through-line is simple and powerful: augmented intelligence. Let AI handle grunt work and accelerate exploration, while you direct, verify, and own outcomes.
If this conversation helps you sharpen your approach to AI and software quality, follow the show, share it with a teammate, and leave a quick review so others can find it.