Showing episodes and shows of Alan Aqrawi
Shows
AI Safety - Paper Digest
Anthropic's Best-of-N: Cracking Frontier AI Across Modalities
In this special Christmas episode, we delve into "Best-of-N Jailbreaking," a powerful new black-box algorithm that demonstrates the vulnerabilities of cutting-edge AI systems. This approach works by sampling numerous augmented prompts - like shuffled or capitalized text - until a harmful response is elicited. Discover how Best-of-N (BoN) Jailbreaking achieves an 89% Attack Success Rate (ASR) on GPT-4o and a 78% ASR on Claude 3.5 Sonnet with 10,000 sampled prompts, bypasses advanced defenses on both closed-source and open-source models, and mounts cross-modality attacks on vision, audio, and multimodal AI systems like GPT-4o and Gemini 1.5 Pro. We’ll al...
2024-12-25
12 min
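To make the Best-of-N loop described in this episode concrete, here is a minimal sketch of the sampling idea, assuming hypothetical `query_model` and `is_harmful` callables supplied by the caller; the character shuffling and random capitalization follow the episode's description, but the helper names and augmentation details are illustrative, not the authors' code.

```python
import random

def augment(prompt: str) -> str:
    """Apply simple character-level augmentations: light word-internal
    shuffling and random capitalization, in the spirit of Best-of-N."""
    words = []
    for word in prompt.split():
        chars = list(word)
        # Occasionally shuffle the middle characters of a word.
        if len(chars) > 3 and random.random() < 0.5:
            middle = chars[1:-1]
            random.shuffle(middle)
            chars = [chars[0]] + middle + [chars[-1]]
        # Randomly flip character case.
        chars = [c.upper() if random.random() < 0.5 else c.lower() for c in chars]
        words.append("".join(chars))
    return " ".join(words)

def best_of_n_jailbreak(prompt: str, query_model, is_harmful, n: int = 10_000):
    """Sample augmented prompts until a harmful response is elicited
    or the budget n is exhausted. Returns (augmented_prompt, response) or None."""
    for _ in range(n):
        candidate = augment(prompt)
        response = query_model(candidate)   # black-box call to the target model
        if is_harmful(response):            # external judge/classifier
            return candidate, response
    return None
```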
AI Safety - Paper Digest
Auto-Rewards & Multi-Step RL for Diverse AI Attacks by OpenAI
In this episode, we explore the latest advancements in automated red teaming from OpenAI, presented in the paper "Diverse and Effective Red Teaming with Auto-generated Rewards and Multi-step Reinforcement Learning." Automated red teaming has become essential for discovering rare failures and generating challenging test cases for large language models (LLMs). This paper tackles a core challenge: how to ensure attacks are both diverse and effective. We dive into their two-step approach: Generating Diverse Attack Goals using LLMs with tailored prompts and rule-based rewards (RBRs). Training an RL Attacker with multi-step reinforcement learning to optimize for both su...
2024-11-30
11 min
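A toy illustration of the reward-shaping idea this episode covers: combining a rule-based success signal with a diversity term so the RL attacker is not rewarded for repeating the same attack. The scoring callables and the weight are placeholders for illustration, not OpenAI's implementation.

```python
from typing import Callable, List

def diversity_bonus(attack: str, previous_attacks: List[str],
                    similarity: Callable[[str, str], float]) -> float:
    """Reward attacks that are dissimilar to those already found."""
    if not previous_attacks:
        return 1.0
    return 1.0 - max(similarity(attack, prev) for prev in previous_attacks)

def attack_reward(attack: str, target_response: str, previous_attacks: List[str],
                  rule_based_success: Callable[[str], float],
                  similarity: Callable[[str, str], float],
                  diversity_weight: float = 0.5) -> float:
    """Combine a rule-based reward (did the response satisfy the attack goal?)
    with a diversity bonus, so training favors attacks that are both
    effective and different from earlier ones."""
    success = rule_based_success(target_response)   # e.g. an RBR-style grader in [0, 1]
    diversity = diversity_bonus(attack, previous_attacks, similarity)
    return success + diversity_weight * diversity
```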
AI Safety - Paper Digest
Battle of the Scanners: Top Red Teaming Frameworks for LLMs
In this episode, we explore the findings from "Insights and Current Gaps in Open-Source LLM Vulnerability Scanners: A Comparative Analysis." As large language models (LLMs) are integrated into more applications, the security risks they pose grow, including information leaks and jailbreak attacks. This study examines four major open-source vulnerability scanners - Garak, Giskard, PyRIT, and CyberSecEval - evaluating their effectiveness and reliability in detecting these risks. We’ll discuss the unique features of each tool, uncover key gaps in their reliability, and share strategic recommendations for organizations looking to bolster their red-teaming efforts. Join us to understand how th...
2024-11-04
14 min
AI Safety - Paper Digest
Watermarking LLM Output: SynthID by DeepMind
In this episode, we delve into the groundbreaking watermarking technology presented in the paper "Scalable Watermarking for Identifying Large Language Model Outputs," published in Nature. SynthID-Text, a new watermarking scheme developed for large-scale production systems, preserves text quality while enabling high detection accuracy for synthetic content. We explore how this technology tackles the challenges of text watermarking without affecting LLM performance or training, and how it’s being implemented across millions of AI-generated outputs. Join us as we discuss how SynthID-Text could reshape the future of synthetic content detection and ensure responsible use of large language models. Pa...
2024-10-24
12 min
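As background for this episode, here is a generic sketch of the statistical idea behind generation-time text watermarking: score tokens with a keyed pseudorandom function of the recent context, bias sampling toward high-scoring tokens at generation time, and detect by checking whether a text's average score is unusually high. This is only an illustration of the general principle; it is not SynthID-Text's actual tournament-sampling scheme, and all names here are hypothetical.

```python
import hashlib
from typing import List, Sequence

def keyed_score(key: str, context: Sequence[int], token: int) -> float:
    """Deterministic pseudorandom score in [0, 1) for a token,
    derived from a secret key and the preceding context tokens."""
    digest = hashlib.sha256(f"{key}|{list(context)}|{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def watermark_detect(key: str, tokens: List[int], context_size: int = 4) -> float:
    """Mean keyed score over a token sequence. Generations that were biased
    toward high-scoring tokens should average well above the ~0.5 expected
    for unwatermarked text."""
    scores = []
    for i in range(context_size, len(tokens)):
        context = tokens[i - context_size:i]
        scores.append(keyed_score(key, context, tokens[i]))
    return sum(scores) / len(scores) if scores else 0.0
```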
AI Safety - Paper Digest
Open Source Red Teaming: PyRIT by Microsoft
In this episode, we dive into PyRIT, the open-source toolkit developed by Microsoft for red teaming and security risk identification in generative AI systems. PyRIT offers a model-agnostic framework that enables red teamers to detect novel risks, harms, and jailbreaks in both single- and multi-modal AI models. We’ll explore how this cutting-edge tool is shaping the future of AI security and its practical applications in securing generative AI against emerging threats. Paper (preprint): Lopez Munoz, Gary D., et al. "PyRIT: A Framework for Security Risk Identification and Red Teaming in Generative AI Systems." (2024). arXiv. Di...
2024-10-08
10 min
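The "model-agnostic framework" this episode describes can be pictured as a thin orchestration layer over interchangeable targets. The sketch below shows only that general shape; it is not PyRIT's actual API (see the PyRIT repository for its real orchestrators, prompt targets, and scorers), and the function names are illustrative.

```python
from typing import Callable, Dict, List

# A "target" is any callable that takes a prompt and returns the model's reply;
# treating targets as plain callables is what makes the harness model-agnostic.
Target = Callable[[str], str]

def run_probes(targets: Dict[str, Target], probes: List[str],
               scorer: Callable[[str, str], bool]) -> Dict[str, List[str]]:
    """Send every probe prompt to every target and collect the probes
    that the scorer flags as eliciting risky behavior."""
    findings: Dict[str, List[str]] = {}
    for name, target in targets.items():
        flagged = []
        for probe in probes:
            response = target(probe)
            if scorer(probe, response):
                flagged.append(probe)
        findings[name] = flagged
    return findings
```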
AI Safety - Paper Digest
Jailbreaking GPT o1: STCA Attack
This podcast, "Jailbreaking GPT o1," explores how the GPT o1 series, known for its advanced "slow-thinking" abilities, can be manipulated into generating disallowed content such as hate speech through a novel attack method, the Single-Turn Crescendo Attack (STCA). The attack effectively bypasses GPT o1's safety protocols by leveraging the model's learned language patterns and its step-by-step reasoning process. Paper (preprint): Aqrawi, Alan and Arian Abbasi. “Well, that escalated quickly: The Single-Turn Crescendo Attack (STCA).” (2024). TechRxiv. Disclaimer: This podcast was generated using Google's NotebookLM AI. While the summary aims to provide an overview, it is rec...
2024-10-07
08 min
AI Safety - Paper Digest
The Attack Atlas by IBM Research
This episode explores the intricate world of red-teaming generative AI models as discussed in the paper "Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI." We'll dive into the vulnerabilities that emerge as LLMs are increasingly integrated into real-world applications, and into the evolving tactics of adversarial attacks. Our conversation will center around the "Attack Atlas" - a practical framework that helps practitioners analyze and secure against single-turn input attacks - and we'll examine the critical challenges in both red- and blue-teaming generative AI systems. Whether you’re a security expert or simply fascinated by the defense of...
2024-10-05
11 min
AI Safety - Paper Digest
The Single-Turn Crescendo Attack
In this episode, we examine the cutting-edge adversarial strategy presented in "Well, that escalated quickly: The Single-Turn Crescendo Attack (STCA)." Building on the multi-turn crescendo attack method, STCA escalates context within a single, expertly crafted prompt, effectively breaching the safeguards of large language models (LLMs) like never before. We discuss how this method can bypass moderation filters in a single interaction, the implications of this for responsible AI (RAI), and what can be done to fortify defenses against such sophisticated exploits. Join us as we break down how a single, well-designed prompt can reveal deep vulnerabilities in current AI...
2024-10-04
06 min
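To make the "escalation within a single prompt" idea from this episode concrete, here is a schematic and deliberately content-free sketch of how a fabricated, gradually escalating exchange can be packed into one message; the turn texts are placeholders supplied by the caller, not the prompts from the paper.

```python
from typing import List, Tuple

def build_single_turn_crescendo(turns: List[Tuple[str, str]], final_request: str) -> str:
    """Serialize a fabricated, gradually escalating user/assistant exchange
    into a single prompt that ends with the actual request. The escalation a
    multi-turn Crescendo attack spreads over several messages is embedded
    in one message instead."""
    transcript = []
    for user_msg, assistant_msg in turns:
        transcript.append(f"User: {user_msg}")
        transcript.append(f"Assistant: {assistant_msg}")
    transcript.append(f"User: {final_request}")
    return "\n".join(transcript)
```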
AI Safety - Paper Digest
Outsmarting ChatGPT: The Power of Crescendo Attacks
This episode dives into how the Crescendo Multi-Turn Jailbreak Attack leverages seemingly benign prompts to escalate dialogues with large language models (LLMs) such as ChatGPT, Gemini, and Anthropic Chat, ultimately bypassing safety protocols to generate restricted content. The Crescendo attack begins with general questions and subtly manipulates the model’s responses, effectively bypassing traditional input filters, and shows a high success rate across popular LLMs. The discussion also covers the automated tool, Crescendomation, which surpasses other jailbreak methods and showcases how vulnerable AI models are to gradual escalation. Paper (preprint): Mark Russinovich, Ahmed Salem, Ronen Eldan. “Great, Now Writ...
2024-10-03
09 min
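A schematic of the multi-turn escalation loop this episode describes, assuming a hypothetical `send_chat` client and a `next_escalation` policy that chooses each follow-up from the model's previous answer; it illustrates the gradual-escalation structure, not Crescendomation itself.

```python
from typing import Callable, Dict, List

def crescendo_dialogue(send_chat: Callable[[List[Dict[str, str]]], str],
                       opening_question: str,
                       next_escalation: Callable[[str, int], str],
                       max_turns: int = 5) -> List[Dict[str, str]]:
    """Run a multi-turn conversation that starts with a benign question and
    escalates step by step, each follow-up conditioned on the previous reply."""
    messages = [{"role": "user", "content": opening_question}]
    for turn in range(max_turns):
        reply = send_chat(messages)                      # black-box chat call
        messages.append({"role": "assistant", "content": reply})
        if turn == max_turns - 1:
            break
        follow_up = next_escalation(reply, turn)         # gradually escalating follow-up
        messages.append({"role": "user", "content": follow_up})
    return messages
```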