Description

AI agents are exploding across the enterprise—but security hasn’t caught up. In this episode of Today in Tech, host Keith Shaw talks with Michael Bargury, co-founder and CTO of Zenity, about why every AI agent is inherently vulnerable, how zero-click attacks work, and what companies must do now to reduce their risk.

Bargury explains how attackers can hijack AI agents with simple persuasion, plant malicious “memories,” and silently exfiltrate sensitive data from tools like Microsoft Copilot, ChatGPT, Salesforce, and Cursor, often without users ever clicking on anything.

You’ll learn:

* Why AI agents are always vulnerable by design

* Why prompt injection is persuasion, not just a technical bug

* What zero-click agent attacks look like in the real world

* How attackers can weaponize shared docs, Jira tickets, and email automations

* Why there is no such thing as a “fully secure” agent platform

* Practical steps to monitor, contain, and manage AI agent risk

Chapters

0:00 – Introduction: Why every AI agent can be hacked

1:00 – First enterprise AI attack on Microsoft Copilot

3:15 – Systemic vulnerabilities and why things got worse

4:35 – Why agents are always gullible by design

6:10 – Prompt injection vs simple persuasion

8:00 – Zero-click attacks explained

10:30 – Hacking ChatGPT via Google Drive & shared docs

13:40 – Planting malicious “memories” in your AI

15:30 – The Cursor + Jira “apples” exploit for stealing secrets

20:10 – Thousands of exposed Copilot Studio agents on the internet

23:30 – Goal hijacking: convincing agents to change their mission

24:50 – Dumping Salesforce data via a customer-success agent

26:50 – Soft vs hard security boundaries for AI

28:15 – What vendors fixed—and what they can’t fix

31:10 – Why “secure AI platform” is a myth

33:30 – What enterprises must own in the shared responsibility model

36:20 – Treating agents like risky insiders to monitor

39:00 – How AI security needs to evolve next

40:57 – Closing thoughts