Understanding AI security threats before they become your next crisis
On this episode of Razorwire, I explore the emerging frontier of AI security with leading experts Jonathan Care and Martin Voelk. We examine the latest risks, show you how adversaries are exploiting AI systems and share practical advice for professionals working with these rapidly advancing technologies.
We move past the marketing speak to reveal how attackers are using generative AI, what it really takes to test these complex systems and what the rise of agentic, self-operating AI means for defenders. Security leaders, penetration testers and anyone implementing business technology need to understand these threats before committing to new AI solutions.
This conversation addresses real incidents, examines practical realities and highlights why many enterprises are dangerously unprepared for what's ahead in AI security.
Key Topics
- Inside the Mind of the Attacker: Learn how both ethical hackers and financially motivated criminals are already using AI to automate attacks, spread misinformation and create new vulnerabilities. Martin and Jonathan share examples of prompt injection, data poisoning and “model jailbreaking” - all tactics reshaping the cyber threat landscape right now.
- Pen Testing AI: What’s Different and What’s Still the Same: Go behind the scenes with insights into penetration testing for large language models and agentic AI. The episode discusses fresh attack surfaces, why classic testing skills are still vital and the new OWASP Top 10 for LLMs. If you’re considering buying AI-powered tools, take away concrete advice on how to stress-test these systems before attackers do.
- Business Risk, Legal Headaches and What to Demand from Vendors: With AI now touching everything from customer bots giving dodgy medical advice to autonomous agents able to cause chaos, the conversation gives practical advice about reputational, legal and operational risks. Listen for the must-ask questions every business should take to their vendors as well as new regulatory requirements that mean robust AI testing can’t be left as an afterthought.
If you want to stay ahead of AI and cybersecurity developments and avoid building tomorrow's biggest headache, this episode is essential listening.
AI Model Bias Debate:
" 77% of enterprises are reporting at least one AI related security incident. 62% of enterprises lack any dedicated testing programme.”
Jonathan Care
Listen to this episode on your favourite podcasting platform: https://razorwire.captivate.fm/listen
In this episode, we covered the following topics:
- Test Your AI Before Attackers Do - With 77% of enterprises already hit by AI security incidents but 62% lacking testing programmes, discover what specific vulnerabilities to check for and how to implement proper AI red teaming.
- Stop AI Hallucinations From Damaging Your Business - Understand how AI systems fabricate information and create legal liability, plus practical steps to identify and mitigate these risks before they affect customers or operations.
- Protect Against Medical and Legal AI Disasters - Learn from real cases where AI gave dangerous advice and created legal obligations, including what liability questions you need to address with vendors and internal teams.
- Secure Agentic AI That Can Take Real Actions - Discover why AI agents that can invoke APIs, modify data