Description

Understanding AI security threats before they become your next crisis

On this episode of Razorwire, I explore the emerging frontier of AI security with leading experts Jonathan Care and Martin Voelk. We examine the latest risks, show you how adversaries are exploiting AI systems and share practical advice for professionals working with these rapidly advancing technologies.

We move past the marketing speak to reveal how attackers are using generative AI, what it really takes to test these complex systems and what the rise of agentic, self-operating AI means for defenders. Security leaders, penetration testers and anyone implementing business technology need to understand these threats before committing to new AI solutions.

This conversation addresses real incidents, examines practical realities and highlights why many enterprises are dangerously unprepared for what's ahead in AI security.

Key Topics

  1. Inside the Mind of the Attacker: Learn how both ethical hackers and financially motivated criminals are already using AI to automate attacks, spread misinformation and create new vulnerabilities. Martin and Jonathan share examples of prompt injection, data poisoning and “model jailbreaking” - all tactics reshaping the cyber threat landscape right now.
  2. Pen Testing AI: What’s Different and What’s Still the Same: Go behind the scenes with insights into penetration testing for large language models and agentic AI. The episode discusses fresh attack surfaces, why classic testing skills are still vital and the new OWASP Top 10 for LLMs. If you’re considering buying AI-powered tools, take away concrete advice on how to stress-test these systems before attackers do.
  3. Business Risk, Legal Headaches and What to Demand from Vendors: With AI now touching everything from customer bots giving dodgy medical advice to autonomous agents able to cause chaos, the conversation gives practical advice on reputational, legal and operational risks. Listen out for the must-ask questions every business should put to their vendors, along with the new regulatory requirements that mean robust AI testing can't be left as an afterthought.

If you want to stay ahead of AI and cybersecurity developments and avoid building tomorrow's biggest headache, this episode is essential listening.

AI Model Bias Debate: 

" 77% of enterprises are reporting at least one AI related security incident. 62% of enterprises lack any dedicated testing programme.”

Jonathan Care

Listen to this episode on your favourite podcasting platform: https://razorwire.captivate.fm/listen

In this episode, we covered the following topics: