In this episode of How Hard Can It Be?, host Alex Schlager sits down with Ken Huang, a leading authority on Generative AI governance, cybersecurity, and risk management. Ken is a CSA Fellow, a NIST GenAI contributor, a co-author of the OWASP Top 10 for LLMs, and an international instructor on GenAI security best practices. He also recently co-authored a book on AI governance and cybersecurity, helping shape the global conversation on AI safety, compliance, and red teaming.
We dive into:
• Why securing LLMs and Agentic AI is uniquely challenging
• The upcoming CSA paper on Agentic AI Red Teaming
• How organizations can stress-test AI systems beyond traditional pen testing
• The essential frameworks and standards for AI governance (NIST, OWASP, CSA)
• What the global community needs to align on for AI safety and policy

If you’re a security leader, AI builder, or policymaker, this episode will give you a front-row look into the future of AI governance and cybersecurity.
👉 Don’t forget to like, comment, and subscribe for more deep dives into AI, security, and technology.

🔗 Learn More About Ken Huang’s Work
Explore more of Ken’s insights, research, and publications at https://distributedapps.ai