Description

In this episode of the Microsoft Threat Intelligence Podcast, Sherrod DeGrippo speaks with Microsoft security and AI researchers Giorgio Severi and Noam Kochavi about a newly observed trend in AI abuse: recommendation poisoning through memory manipulation. 

While looking into prompt injection and reprompt-style behaviors, the team uncovered something quieter but potentially more persistent: websites embedding hidden instructions inside "Summarize with AI" links that attempt to influence what an AI assistant remembers and recommends over time. 

Rather than pursuing immediate exploitation, this technique aims to shape an assistant's long-term behavior. Giorgio and Noam explain how it works, why it's spreading across industries, where legitimate marketing tactics can blur into security risk, and what defenders and users should understand about managing AI memory in an increasingly agent-driven environment. 

In this episode you’ll learn:      

How AI memory poisoning differs from traditional prompt injection 

Why legitimate businesses are using memory manipulation tactics 

What threat hunters can look for inside enterprise telemetry 

Some questions we ask:

How is memory poisoning different from prompt injection? 

What are the long-term risks of embedding bias into AI memory? 

Could this technique be used for more harmful influence beyond marketing? 

 

Resources:  

View Giorgio Severi on LinkedIn  

View Noam Kochavi on LinkedIn  

View Sherrod DeGrippo on LinkedIn  

 

Related Microsoft Podcasts:                   

Afternoon Cyber Tea with Ann Johnson 

The BlueHat Podcast 

Uncovering Hidden Risks     

 

Discover and follow other Microsoft podcasts at microsoft.com/podcasts  

 

Get the latest threat intelligence insights and guidance at Microsoft Security Insider 

The Microsoft Threat Intelligence Podcast is produced by Microsoft and Hangar Studios and distributed as part of the N2K media network.