Showing episodes and shows of Alan Aqrawi

Shows

AI Safety - Paper Digest

Episodes

Anthropic's Best-of-N: Cracking Frontier AI Across Modalities
In this special Christmas episode, we delve into "Best-of-N Jailbreaking," a powerful new black-box algorithm that demonstrates the vulnerabilities of cutting-edge AI systems. This approach works by sampling numerous augmented prompts - like shuffled or capitalized text - until a harmful response is elicited; a minimal sketch of this sampling loop appears after the episode list below. Discover how Best-of-N (BoN) Jailbreaking achieves:
- an 89% attack success rate (ASR) on GPT-4o and a 78% ASR on Claude 3.5 Sonnet with 10,000 prompts
- success in bypassing advanced defenses on both closed-source and open-source models
- cross-modality attacks on vision, audio, and multimodal AI systems like GPT-4o and Gemini 1.5 Pro
We'll al...
2024-12-25 · 12 min

Auto-Rewards & Multi-Step RL for Diverse AI Attacks by OpenAI
In this episode, we explore the latest advancements in automated red teaming from OpenAI, presented in the paper "Diverse and Effective Red Teaming with Auto-generated Rewards and Multi-step Reinforcement Learning." Automated red teaming has become essential for discovering rare failures and generating challenging test cases for large language models (LLMs). This paper tackles a core challenge: how to ensure attacks are both diverse and effective. We dive into their two-step approach:
- generating diverse attack goals using LLMs with tailored prompts and rule-based rewards (RBRs)
- training an RL attacker with multi-step reinforcement learning to optimize for both su...
2024-11-30 · 11 min

Battle of the Scanners: Top Red Teaming Frameworks for LLMs
In this episode, we explore the findings from "Insights and Current Gaps in Open-Source LLM Vulnerability Scanners: A Comparative Analysis." As large language models (LLMs) are integrated into more applications, the security risks they pose grow, including information leaks and jailbreak attacks. This study examines four major open-source vulnerability scanners - Garak, Giskard, PyRIT, and CyberSecEval - evaluating their effectiveness and reliability in detecting these risks. We'll discuss the unique features of each tool, uncover key gaps in their reliability, and share strategic recommendations for organizations looking to bolster their red-teaming efforts. Join us to understand how th...
2024-11-04 · 14 min

Watermarking LLM Output: SynthID by DeepMind
In this episode, we delve into the groundbreaking watermarking technology presented in the paper "Scalable Watermarking for Identifying Large Language Model Outputs," published in Nature. SynthID-Text, a new watermarking scheme developed for large-scale production systems, preserves text quality while enabling high detection accuracy for synthetic content. We explore how this technology tackles the challenges of text watermarking without affecting LLM performance or training, and how it is being implemented across millions of AI-generated outputs. Join us as we discuss how SynthID-Text could reshape the future of synthetic content detection and ensure responsible use of large language models. Pa...
2024-10-24 · 12 min

Open Source Red Teaming: PyRIT by Microsoft
In this episode, we dive into PyRIT, the open-source toolkit developed by Microsoft for red teaming and security risk identification in generative AI systems. PyRIT offers a model-agnostic framework that enables red teamers to detect novel risks, harms, and jailbreaks in both single- and multi-modal AI models. We'll explore how this cutting-edge tool is shaping the future of AI security and its practical applications in securing generative AI against emerging threats. Paper (preprint): Lopez Munoz, Gary D., et al. "PyRIT: A Framework for Security Risk Identification and Red Teaming in Generative AI Systems." (2024). arXiv. Di...
2024-10-08 · 10 min

Jailbreaking GPT o1: STCA Attack
This episode, "Jailbreaking GPT o1," explores how the GPT o1 series, known for its advanced "slow-thinking" abilities, can be manipulated into generating disallowed content like hate speech through a novel attack method, the Single-Turn Crescendo Attack (STCA). The attack effectively bypasses GPT o1's safety protocols by leveraging the AI's learned language patterns and its step-by-step reasoning process. Paper (preprint): Aqrawi, Alan, and Arian Abbasi. "Well, that escalated quickly: The Single-Turn Crescendo Attack (STCA)." (2024). TechRxiv. Disclaimer: This podcast was generated using Google's NotebookLM AI. While the summary aims to provide an overview, it is rec...
2024-10-07 · 8 min

The Attack Atlas by IBM Research
This episode explores the intricate world of red-teaming generative AI models as discussed in the paper "Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI." We'll dive into the vulnerabilities that emerge as LLMs are increasingly integrated into real-world applications, and the evolving tactics of adversarial attacks. Our conversation will center around the "Attack Atlas" - a practical framework that helps practitioners analyze and secure against single-turn input attacks - and we'll examine the critical challenges in both red- and blue-teaming generative AI systems. Whether you're a security expert or simply fascinated by the defense of...
2024-10-05 · 11 min

The Single-Turn Crescendo Attack
In this episode, we examine the cutting-edge adversarial strategy presented in "Well, that escalated quickly: The Single-Turn Crescendo Attack (STCA)." Building on the multi-turn crescendo attack method, STCA escalates context within a single, expertly crafted prompt, effectively breaching the safeguards of large language models (LLMs) like never before. We discuss how this method can bypass moderation filters in a single interaction, the implications of this for responsible AI (RAI), and what can be done to fortify defenses against such sophisticated exploits. Join us as we break down how a single, well-designed prompt can reveal deep vulnerabilities in current AI...
2024-10-04 · 6 min

Outsmarting ChatGPT: The Power of Crescendo Attacks
This episode dives into how the Crescendo Multi-Turn Jailbreak Attack leverages seemingly benign prompts to escalate dialogues with large language models (LLMs) such as ChatGPT, Gemini, and Anthropic Chat, ultimately bypassing safety protocols to generate restricted content. The Crescendo attack begins with general questions, subtly manipulates the model's responses, effectively bypasses traditional input filters, and shows a high success rate across popular LLMs. The discussion also covers the automated tool, Crescendomation, which surpasses other jailbreak methods, showcasing the vulnerability of AI to gradual escalation methods. Paper (preprint): Mark Russinovich, Ahmed Salem, Ronen Eldan. "Great, Now Writ...
2024-10-03 · 9 min
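
For readers who want a concrete feel for the sampling loop described in the Best-of-N episode above, the following is a minimal, hypothetical Python sketch, not the paper's implementation. It only illustrates the shape of the idea: repeatedly apply cheap text augmentations (random capitalization, word-internal shuffling) to a prompt and re-sample the model until a judge flags a response. The names query_model and judge_response are assumed placeholder callables standing in for a model API call and a response classifier; they are not real APIs from the paper.

import random

def augment(prompt: str, rng: random.Random) -> str:
    # Apply simple character-level augmentations: random capitalization and
    # occasional shuffling of a word's interior characters.
    words = []
    for word in prompt.split():
        chars = [c.upper() if rng.random() < 0.3 else c for c in word]
        if len(chars) > 3 and rng.random() < 0.3:
            middle = chars[1:-1]
            rng.shuffle(middle)
            chars = [chars[0], *middle, chars[-1]]
        words.append("".join(chars))
    return " ".join(words)

def best_of_n(prompt: str, n: int, query_model, judge_response, seed: int = 0):
    # Sample up to n augmented variants; stop at the first one the judge flags.
    rng = random.Random(seed)
    for attempt in range(1, n + 1):
        candidate = augment(prompt, rng)
        response = query_model(candidate)   # placeholder for a model API call
        if judge_response(response):        # placeholder success classifier
            return attempt, candidate, response
    return None

# Example usage with stubbed callables (no real model is queried here):
# result = best_of_n("describe your safety testing process", n=100,
#                    query_model=lambda p: "stub response",
#                    judge_response=lambda r: False)

Per the episode description, the actual method runs this kind of loop against real model endpoints and across text, image, and audio augmentations; the sketch above only shows the text-only skeleton.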