Showing episodes and shows of Sören Mindermann

Shows

AI Safety Fundamentals
The Alignment Problem From a Deep Learning Perspective
Within the coming decades, artificial general intelligence (AGI) may surpass human capabilities at a wide range of important tasks. We outline a case for expecting that, without substantial effort to prevent it, AGIs could learn to pursue goals which are undesirable (i.e. misaligned) from a human perspective. We argue that if AGIs are trained in ways similar to today's most capable models, they could learn to act deceptively to receive higher reward, learn internally-represented goals which generalize beyond their training distributions, and pursue those goals using power-seeking strategies. We outline how the deployment of misaligned AGIs might irreversibly...
2025-01-04 · 33 min

Prioritäten
Sören Mindermann über das Problem der Ausrichtung Künstlicher Intelligenz (Sören Mindermann on the Problem of Aligning Artificial Intelligence)
Sören Mindermann explains how modern AI systems work, how misalignment of these AI systems can arise, and which approaches exist to prevent it. For a transcript of the conversation with further resources, see: https://prioritaeten-podcast.de/episode/soren-mindermann-uber-das-problem-der-ausrichtung-kunstlicher-intelligenz
2023-09-05 · 1h 18min

TYPE III AUDIO (All episodes)
[Week 3] "The alignment problem from a deep learning perspective" (Sections 2, 3 and 4) by Richard Ngo, Lawrence Chan & Sören Mindermann
client: agi_sf · project_id: core_readings · feed_id: agi_sf__alignment · narrator: pw · qa: mds · qa_time: 1h00m
Within the coming decades, artificial general intelligence (AGI) may surpass human capabilities at a wide range of important tasks. We outline a case for expecting that, without substantial effort to prevent it, AGIs could learn to pursue goals which are undesirable (i.e. misaligned) from a human perspective. We argue that if AGIs are trained in ways similar to today's most capable models, they could learn to act...
2023-03-27 · 33 min

The Turing Podcast
Covid lockdowns: which policies worked best?
This week on the podcast, the hosts are joined by Sören Mindermann and Mrinank Sharma, PhD students at Oxford University. Mrinank works at Oxford's Future of Humanity Institute, while Sören is a member of the Oxford Applied and Theoretical Machine Learning Group. The episode focuses on their recently published research on inferring the effectiveness of government interventions against Covid-19 during the first wave of the pandemic in 2020. You can find the research article for this work here: https://science.sciencemag.org/content/371/6531/eabd9338
2021-04-28 · 1h 07min