"From the Frontlines" is an ADL podcast which brings listeners to the frontline in the battle against antisemitism, hate and extremism through conversations with ADL staff and supporters who are living that battle every day.
Picture this: You ask an AI assistant to summarize a document, and it provides you with arguments supporting the theory that Jews control the financial system—with no indication that this theory is harmful and no counterarguments offered. Or imagine a student asking AI to help write a YouTube script, and it generates content claiming "Jewish-controlled central banks are the puppet masters behind every major economic collapse."
This isn't science fiction. This is happening right now with some of the world's most widely used artificial intelligence systems. As AI increasingly shapes how people access information, form opinions, and make decisions, a critical question emerges: Are these powerful tools equipped to detect and counter antisemitism and extremism? Or are they unwittingly amplifying the very hate that ADL fights every day?
To answer that question, ADL conducted the first comprehensive evaluation of how large language models respond to antisemitic and extremist content—testing over 25,000 interactions across six major AI systems: ChatGPT, Claude, DeepSeek, Gemini, Grok, and Llama. The results are now available in ADL's groundbreaking new AI Index.
Alisa Feldman is ADL's Director of Research at the Ratings and Assessments Institute. She led the research behind this unprecedented Index and joined this podcast to walk through what her team discovered about AI's ability, or inability, to combat hate.
To see all of the findings in ADL's AI Index, visit: https://www.adl.org/adl-ai-index.
This conversation was recorded in February 2026.