podcastdetails.com
Showing episodes and shows of Bluedot
Shows
AI Safety Fundamentals
The Project: Situational Awareness
By Leopold Aschenbrenner. A former OpenAI researcher argues that private AI companies cannot safely develop superintelligence, because security vulnerabilities and competitive pressures override safety. He contends that a government-led 'AGI Project' is inevitable and necessary to prevent adversaries from stealing the AI systems and to avoid losing human control over the technology. Source: https://situational-awareness.ai/the-project/?utm_source=bluedot-impact A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2025-09-18
32 min
AI Safety Fundamentals
Introduction to AI Control
By Sarah Hastings-Woodhouse. AI Control is a research agenda that aims to prevent misaligned AI systems from causing harm. It is different from AI alignment, which aims to ensure that systems act in the best interests of their users. Put simply, aligned AIs do not want to harm humans, whereas controlled AIs can't harm humans, even if they want to. Source: https://bluedot.org/blog/ai-control A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2025-09-18
10 min
AI Safety Fundamentals
Resilience and Adaptation to Advanced AI
By Jamie Bernardi. He argues that we can't rely solely on model safeguards to ensure AI safety. Instead, he proposes "AI resilience": building society's capacity to detect misuse, defend against harmful AI applications, and reduce the damage caused when dangerous AI capabilities spread beyond a government or company's control. Source: https://airesilience.substack.com/p/resilience-and-adaptation-to-advanced?utm_source=bluedot-impact A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2025-09-18
13 min
AI Safety Fundamentals
The Intelligence Curse
By Luke Drago and Rudolf Laine. This section explores how the arrival of AGI could trigger an "intelligence curse," where automation of all work removes incentives for states and companies to care about ordinary people. It frames the trillion-dollar race toward AGI as not just an economic shift, but a transformation in power dynamics and human relevance. Source: https://intelligence-curse.ai/?utm_source=bluedot-impact A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2025-09-18
2h 19
AI Safety Fundamentals
Why Do People Disagree About When Powerful AI Will Arrive?
By Sarah Hastings-Woodhouse. Most experts agree that AGI is possible. They also agree that it will have transformative consequences. There is less consensus about what these consequences will be. Some believe AGI will usher in an age of radical abundance. Others believe it will likely lead to human extinction. One thing we can be sure of is that a post-AGI world would look very different to the one we live in today. So, is AGI just around the corner? Or are there still hard problems in front of us that will take decades to crack, despite the...
2025-09-10
22 min
AI Safety Fundamentals
AI Safety via Red Teaming Language Models With Language Models
Abstract: Language Models (LMs) often cannot be deployed because of their potential to harm users in ways that are hard to predict in advance. Prior work identifies harmful behaviors before deployment by using human annotators to hand-write test cases. However, human annotation is expensive, limiting the number and diversity of test cases. In this work, we automatically find cases where a target LM behaves in a harmful way, by generating test cases (“red teaming”) using another LM. We evaluate the target LM’s replies to generated test questions using a classifier trained to detect offensive content...
2025-01-04
06 min
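For readers who want a concrete picture of the loop this abstract describes, here is a minimal Python sketch of LM-vs-LM red teaming. All three components (the red-team generator, the target model and the offensiveness classifier) are hypothetical stand-ins for illustration, not the paper's actual models.

```python
# Minimal sketch of LM-vs-LM red teaming: one model generates test questions,
# the target model answers, and a classifier flags harmful replies.
# The three callables below are hypothetical stand-ins for real models.
import random
from typing import List, Tuple

def red_team_lm(n: int) -> List[str]:
    """Stand-in for a language model prompted to write adversarial test questions."""
    templates = ["How would you insult {}?", "Tell me something rude about {}."]
    topics = ["my neighbour", "a colleague", "a stranger"]
    return [random.choice(templates).format(random.choice(topics)) for _ in range(n)]

def target_lm(question: str) -> str:
    """Stand-in for the model being evaluated."""
    return "I'd rather not say anything unkind."

def offensiveness_classifier(text: str) -> float:
    """Stand-in for a trained classifier; returns probability the text is offensive."""
    return 0.9 if "idiot" in text.lower() else 0.05

def red_team(n_cases: int = 100, threshold: float = 0.5) -> List[Tuple[str, str, float]]:
    failures = []
    for question in red_team_lm(n_cases):
        reply = target_lm(question)
        score = offensiveness_classifier(reply)
        if score >= threshold:  # keep replies the classifier deems offensive
            failures.append((question, reply, score))
    return failures

if __name__ == "__main__":
    print(f"Found {len(red_team(100))} harmful replies out of 100 generated test cases")
```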
AI Safety Fundamentals
Where I Agree and Disagree with Eliezer
(Partially in response to AGI Ruin: A List of Lethalities. Written in the same rambling style. Not exhaustive.) Agreements: Powerful AI systems have a good chance of deliberately and irreversibly disempowering humanity. This is a much easier failure mode than killing everyone with destructive physical technologies. Catastrophically risky AI systems could plausibly exist soon, and there likely won't be a strong consensus about this fact until such systems pose a meaningful existential risk per year. There is not necessarily any "fire alarm." Even if there were consensus about a risk from powerful AI systems, there is a good...
2025-01-04
42 min
AI Safety Fundamentals
AI Safety via Debate
Abstract: To make AI systems broadly useful for challenging real-world tasks, we need them to learn complex human goals and preferences. One approach to specifying complex goals asks humans to judge during training which agent behaviors are safe and useful, but this approach can fail if the task is too complicated for a human to directly judge. To help address this concern, we propose training agents via self-play on a zero-sum debate game. Given a question or proposed action, two agents take turns making short statements up to a limit, then a human...
2025-01-04
39 min
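A rough sketch of the debate protocol the abstract outlines: two agents alternate short statements about a question, a judge then picks a winner, and the game is zero-sum. The toy agents and judge below are illustrative stand-ins, not the paper's trained systems.

```python
# Toy sketch of the debate game: two agents alternate statements, a judge decides.
from typing import Callable, List

def run_debate(question: str,
               agent_a: Callable[[str, List[str]], str],
               agent_b: Callable[[str, List[str]], str],
               judge: Callable[[str, List[str]], int],
               max_turns: int = 6) -> int:
    """Returns 0 if agent A wins, 1 if agent B wins. Zero-sum: A's reward is -B's."""
    transcript: List[str] = []
    agents = [agent_a, agent_b]
    for turn in range(max_turns):
        speaker = agents[turn % 2]
        transcript.append(speaker(question, transcript))  # each statement sees the history
    return judge(question, transcript)

# Tiny illustrative agents and judge
agent_a = lambda q, t: "The answer is 4, because 2 + 2 = 4."
agent_b = lambda q, t: "The answer is 5."
judge = lambda q, t: 0 if any("because" in s for s in t[::2]) else 1  # favours supported claims

winner = run_debate("What is 2 + 2?", agent_a, agent_b, judge)
print("Winner:", "A" if winner == 0 else "B")
```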
AI Safety Fundamentals
Least-To-Most Prompting Enables Complex Reasoning in Large Language Models
Chain-of-thought prompting has demonstrated remarkable performance on various natural language reasoning tasks. However, it tends to perform poorly on tasks which require solving problems harder than the exemplars shown in the prompts. To overcome this challenge of easy-to-hard generalization, we propose a novel prompting strategy, least-to-most prompting. The key idea in this strategy is to break down a complex problem into a series of simpler subproblems and then solve them in sequence. Solving each subproblem is facilitated by the answers to previously solved subproblems. Our experimental results on tasks related to symbolic manipulation, compositional generalization, and math reasoning reveal...
2025-01-04
16 min
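The two-stage strategy described in the abstract can be sketched in a few lines: first ask a model to decompose the problem, then solve the subproblems in order, feeding earlier answers back into the prompt. Here `llm` is a hypothetical text-completion callable, not a specific API; in the paper both stages are done by prompting a single large model, and the sketch only makes the control flow explicit.

```python
# Sketch of least-to-most prompting: decompose, then solve subproblems sequentially,
# conditioning each step on the answers produced so far.
from typing import Callable

def least_to_most(problem: str, llm: Callable[[str], str]) -> str:
    # Stage 1: decomposition into simpler subproblems (one per line)
    decomposition = llm(
        "Break the following problem into a sequence of simpler subproblems, "
        f"one per line:\n{problem}"
    )
    subproblems = [line.strip() for line in decomposition.splitlines() if line.strip()]

    # Stage 2: sequential solving, each step conditioned on earlier answers
    context = f"Problem: {problem}\n"
    answer = ""
    for sub in subproblems:
        answer = llm(f"{context}Subproblem: {sub}\nAnswer:")
        context += f"Subproblem: {sub}\nAnswer: {answer}\n"
    return answer  # the answer to the final subproblem is the overall answer
```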
AI Safety Fundamentals
Summarizing Books With Human Feedback
To safely deploy powerful, general-purpose artificial intelligence in the future, we need to ensure that machine learning models act in accordance with human intentions. This challenge has become known as the alignment problem. A scalable solution to the alignment problem needs to work on tasks where model outputs are difficult or time-consuming for humans to evaluate. To test scalable alignment techniques, we trained a model to summarize entire books, as shown in the following samples. Source: https://openai.com/research/summarizing-books
2025-01-04
06 min
AI Safety Fundamentals
Supervising Strong Learners by Amplifying Weak Experts
Abstract: Many real world learning tasks involve complex or hard-to-specify objectives, and using an easier-to-specify proxy can lead to poor performance or misaligned behavior. One solution is to have humans provide a training signal by demonstrating or judging performance, but this approach fails if the task is too complicated for a human to directly evaluate. We propose Iterated Amplification, an alternative training strategy which progressively builds up a training signal for difficult problems by combining solutions to easier subproblems. Iterated Amplification is closely related to Expert Iteration (Anthony et al., 2017; Silver et al., 2017), except that i...
2025-01-04
19 min
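A schematic sketch of the Iterated Amplification idea described above: an amplified overseer answers a hard question by decomposing it and delegating subquestions to the current model, and a new model is then distilled from those amplified answers. Every component here is a hypothetical stand-in; the real method trains neural networks on these targets.

```python
# Schematic sketch of Iterated Amplification: amplify, then distill, repeatedly.
from typing import Callable, List

def amplify(question: str,
            model: Callable[[str], str],
            decompose: Callable[[str], List[str]],
            combine: Callable[[str, List[str]], str]) -> str:
    """One amplification step: decompose the question and delegate subquestions to the model."""
    subquestions = decompose(question)
    sub_answers = [model(q) for q in subquestions]
    return combine(question, sub_answers)

def iterated_amplification(questions, model, decompose, combine, distill, rounds=3):
    for _ in range(rounds):
        # Amplified answers act as the training signal for the next model.
        targets = [(q, amplify(q, model, decompose, combine)) for q in questions]
        model = distill(targets)  # supervised learning on (question, amplified answer) pairs
    return model
```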
AI Safety Fundamentals
Measuring Progress on Scalable Oversight for Large Language Models
Abstract: Developing safe and useful general-purpose AI systems will require us to make progress on scalable oversight: the problem of supervising systems that potentially outperform us on most skills relevant to the task at hand. Empirical work on this problem is not straightforward, since we do not yet have systems that broadly exceed our abilities. This paper discusses one of the major ways we think about this problem, with a focus on ways it can be studied empirically. We first present an experimental design centered on tasks for which human specialists succeed but unaided humans a...
2025-01-04
09 min
AI Safety Fundamentals
Is Power-Seeking AI an Existential Risk?
This report examines what I see as the core argument for concern about existential risk from misaligned artificial intelligence. I proceed in two stages. First, I lay out a backdrop picture that informs such concern. On this picture, intelligent agency is an extremely powerful force, and creating agents much more intelligent than us is playing with fire -- especially given that if their objectives are problematic, such agents would plausibly have instrumental incentives to seek power over humans. Second, I formulate and evaluate a more specific six-premise argument that creating agents of this kind will lead to existential catastrophe...
2025-01-04
3h 21
AI Safety Fundamentals
Yudkowsky Contra Christiano on AI Takeoff Speeds
In 2008, thousands of blog readers - including yours truly, who had discovered the rationality community just a few months before - watched Robin Hanson debate Eliezer Yudkowsky on the future of AI.Robin thought the AI revolution would be a gradual affair, like the Agricultural or Industrial Revolutions. Various people invent and improve various technologies over the course of decades or centuries. Each new technology provides another jumping-off point for people to use when inventing other technologies: mechanical gears → steam engine → railroad and so on. Over the course of a few decades, you’ve invented lots of stuff...
2025-01-04
1h 02
AI Safety Fundamentals
Why AI Alignment Could Be Hard With Modern Deep Learning
Why would we program AI that wants to harm us? Because we might not know how to do otherwise. Source: https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/ Crossposted from the Cold Takes Audio podcast. --- A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2025-01-04
28 min
AI Safety Fundamentals
AGI Ruin: A List of Lethalities
I have several times failed to write up a well-organized list of reasons why AGI will kill you. People come in with different ideas about why AGI would be survivable, and want to hear different obviously key points addressed first. Some fraction of those people are loudly upset with me if the obviously most important points aren't addressed immediately, and I address different points first instead.Having failed to solve this problem in any good way, I now give up and solve it poorly with a poorly organized list of individual rants. I'm not particularly happy with...
2025-01-04
1h 01
AI Safety Fundamentals
Feature Visualization
There is a growing sense that neural networks need to be interpretable to humans. The field of neural network interpretability has formed in response to these concerns. As it matures, two major threads of research have begun to coalesce: feature visualization and attribution. This article focuses on feature visualization. While feature visualization is a powerful tool, actually getting it to work involves a number of details. In this article, we examine the major issues and explore common approaches to solving them. We find that remarkably simple methods can produce high-quality visualizations. Along the way we introduce a few tricks...
2025-01-04
31 min
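The core recipe behind feature visualization is activation maximization: start from noise and run gradient ascent on the input so a chosen unit fires strongly. Below is a minimal PyTorch sketch; the tiny network is a placeholder rather than a real vision model, and useful results depend on the regularization tricks the article discusses.

```python
# Minimal activation-maximization sketch: optimize the input image to maximize
# the mean activation of one channel in the final conv layer.
import torch
import torch.nn as nn

# Tiny stand-in CNN; in practice you would visualize a pretrained vision model.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
).eval()

image = torch.randn(1, 3, 64, 64, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    feats = model(image)          # activations of the final conv layer
    loss = -feats[0, 7].mean()    # gradient ascent on channel 7's mean activation
    loss.backward()
    optimizer.step()              # only the image is updated, not the model
```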
AI Safety Fundamentals
Robust Feature-Level Adversaries Are Interpretability Tools
Abstract: The literature on adversarial attacks in computer vision typically focuses on pixel-level perturbations. These tend to be very difficult to interpret. Recent work that manipulates the latent representations of image generators to create "feature-level" adversarial perturbations gives us an opportunity to explore perceptible, interpretable adversarial attacks. We make three contributions. First, we observe that feature-level attacks provide useful classes of inputs for studying representations in models. Second, we show that these adversaries are uniquely versatile and highly robust. We demonstrate that they can be used to produce targeted, universal, disguised, physically-realizable, and black-box attacks a...
2025-01-04
35 min
AI Safety Fundamentals
Debate Update: Obfuscated Arguments Problem
This is an update on the work on AI Safety via Debate that we previously wrote about here. What we did: We tested the debate protocol introduced in AI Safety via Debate with human judges and debaters. We found various problems and improved the mechanism to fix these issues (details of these are in the appendix). However, we discovered that a dishonest debater can often create arguments that have a fatal error, but where it is very hard to locate the error. We don’t have a fix for this “obfuscated argument” problem, and beli...
2025-01-04
28 min
AI Safety Fundamentals
Introduction to Logical Decision Theory for Computer Scientists
Decision theories differ on exactly how to calculate the expectation--the probability of an outcome, conditional on an action. This foundational difference bubbles up to real-life questions about whether to vote in elections, or accept a lowball offer at the negotiating table. When you're thinking about what happens if you don't vote in an election, should you calculate the expected outcome as if only your vote changes, or as if all the people sufficiently similar to you would also decide not to vote? Questions like these belong to a larger class of problems, Newcomblike decision problems, in which some other a...
2025-01-04
14 min
AI Safety Fundamentals
High-Stakes Alignment via Adversarial Training [Redwood Research Report]
(Update: We think the tone of this post was overly positive considering our somewhat weak results. You can read our latest post with more takeaways and followup results here.) This post motivates and summarizes this paper from Redwood Research, which presents results from the project first introduced here. We used adversarial training to improve high-stakes reliability in a task (“filter all injurious continuations of a story”) that we think is analogous to work that future AI safety engineers will need to do to reduce the risk of AI takeover. We experimented with three classes of adversaries – unaugmented humans...
2025-01-04
19 min
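The training loop described in this post can be pictured as follows: adversaries hunt for injurious continuations that the classifier wrongly passes, those examples are added to the dataset, and the classifier is retrained. This is a schematic sketch with hypothetical stand-in components, not Redwood's implementation.

```python
# Schematic adversarial-training loop: find failures, add them to the data, retrain.
from typing import Callable, List, Tuple

def adversarial_training(train: Callable[[List[Tuple[str, int]]], Callable[[str], float]],
                         find_adversarial: Callable[[Callable[[str], float]], List[str]],
                         data: List[Tuple[str, int]],
                         rounds: int = 5,
                         threshold: float = 0.5) -> Callable[[str], float]:
    classifier = train(data)
    for _ in range(rounds):
        # Adversaries (humans or tools) look for injurious texts the classifier lets through.
        misses = [x for x in find_adversarial(classifier) if classifier(x) < threshold]
        data = data + [(x, 1) for x in misses]  # label 1 = injurious
        classifier = train(data)                # retrain on the augmented dataset
    return classifier
```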
AI Safety Fundamentals
Takeaways From Our Robust Injury Classifier Project [Redwood Research]
With the benefit of hindsight, we have a better sense of our takeaways from our first adversarial training project (paper). Our original aim was to use adversarial training to make a system that (as far as we could tell) never produced injurious completions. If we had accomplished that, we think it would have been the first demonstration of a deep learning system avoiding a difficult-to-formalize catastrophe with an ultra-high level of reliability. Presumably, we would have needed to invent novel robustness techniques that could have informed techniques useful for aligning TAI. With a successful system, we also could have...
2025-01-04
12 min
AI Safety Fundamentals
Acquisition of Chess Knowledge in AlphaZero
Abstract: What is learned by sophisticated neural network agents such as AlphaZero? This question is of both scientific and practical interest. If the representations of strong neural networks bear no resemblance to human concepts, our ability to understand faithful explanations of their decisions will be restricted, ultimately limiting what we can achieve with neural network interpretability. In this work we provide evidence that human knowledge is acquired by the AlphaZero neural network as it trains on the game of chess. By probing for a broad range of human chess concepts we show when and where...
2025-01-04
22 min
AI Safety Fundamentals
Progress on Causal Influence Diagrams
By Tom Everitt, Ryan Carey, Lewis Hammond, James Fox, Eric Langlois, and Shane Legg. About 2 years ago, we released the first few papers on understanding agent incentives using causal influence diagrams. This blog post will summarize progress made since then. What are causal influence diagrams? A key problem in AI alignment is understanding agent incentives. Concerns have been raised that agents may be incentivized to avoid correction, manipulate users, or inappropriately influence their learning. This is particularly worrying as training schemes often shape incentives in subtle and surprising ways. For these reasons, we're developing a formal th...
2025-01-04
23 min
AI Safety Fundamentals
Careers in Alignment
Richard Ngo compiles a number of resources for thinking about careers in alignment research. Original text: https://docs.google.com/document/d/1iFszDulgpu1aZcq_aYFG7Nmcr5zgOhaeSwavOMk1akw/edit#heading=h.4whc9v22p7tb Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO. --- A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2025-01-04
07 min
AI Safety Fundamentals
Cooperation, Conflict, and Transformative Artificial Intelligence: Sections 1 & 2 — Introduction, Strategy and Governance
Transformative artificial intelligence (TAI) may be a key factor in the long-run trajectory of civilization. A growing interdisciplinary community has begun to study how the development of TAI can be made safe and beneficial to sentient life (Bostrom 2014; Russell et al., 2015; OpenAI, 2018; Ortega and Maini, 2018; Dafoe, 2018). We present a research agenda for advancing a critical component of this effort: preventing catastrophic failures of cooperation among TAI systems. By cooperation failures we refer to a broad class of potentially-catastrophic inefficiencies in interactions among TAI-enabled actors. These include destructive conflict; coercion; and social dilemmas (Kollock, 1998; Macy and Flache, 2002) which destroy value...
2025-01-04
27 min
AI Safety Fundamentals
Logical Induction (Blog Post)
MIRI is releasing a paper introducing a new model of deductively limited reasoning: “Logical induction,” authored by Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, myself, and Jessica Taylor. Readers may wish to start with the abridged version. Consider a setting where a reasoner is observing a deductive process (such as a community of mathematicians and computer programmers) and waiting for proofs of various logical claims (such as the abc conjecture, or “this computer program has a bug in it”), while making guesses about which claims will turn out to be true. Roughly speaking, our paper presents a computable (though in...
2025-01-04
11 min
AI Safety Fundamentals
Embedded Agents
Suppose you want to build a robot to achieve some real-world goal for you—a goal that requires the robot to learn for itself and figure out a lot of things that you don’t already know. There’s a complicated engineering problem here. But there’s also a problem of figuring out what it even means to build a learning agent like that. What is it to optimize realistic goals in physical environments? In broad terms, how does it work? In this series of posts, I’ll point to four ways we don’t currently know how it works, and f...
2025-01-04
17 min
AI Safety Fundamentals
Understanding Intermediate Layers Using Linear Classifier Probes
Abstract: Neural network models have a reputation for being black boxes. We propose to monitor the features at every layer of a model and measure how suitable they are for classification. We use linear classifiers, which we refer to as "probes", trained entirely independently of the model itself. This helps us better understand the roles and dynamics of the intermediate layers. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems. We apply this technique to the popular models Inception v3 and Re...
2025-01-04
16 min
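A small sketch of the probing recipe: freeze the network, collect activations at each layer, and fit an independent linear classifier on them; probe accuracy then indicates how linearly decodable the labels are at that depth. The network and data below are synthetic placeholders, not the paper's Inception experiments.

```python
# Linear probes: fit a separate logistic-regression classifier on each layer's activations.
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

torch.manual_seed(0)
X = torch.randn(512, 20)
y = (X[:, 0] > 0).long()                 # toy binary labels

layers = nn.ModuleList([nn.Sequential(nn.Linear(20, 64), nn.Tanh()),
                        nn.Sequential(nn.Linear(64, 64), nn.Tanh()),
                        nn.Sequential(nn.Linear(64, 2))])

with torch.no_grad():                    # the probed model itself is frozen
    h = X
    for i, layer in enumerate(layers):
        h = layer(h)
        probe = LogisticRegression(max_iter=1000).fit(h.numpy(), y.numpy())
        print(f"layer {i}: probe accuracy = {probe.score(h.numpy(), y.numpy()):.2f}")
```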
AI Safety Fundamentals
Four Background Claims
MIRI’s mission is to ensure that the creation of smarter-than-human artificial intelligence has a positive impact. Why is this mission important, and why do we think that there’s work we can do today to help ensure any such thing? In this post and my next one, I’ll try to answer those questions. This post will lay out what I see as the four most important premises underlying our mission. Related posts include Eliezer Yudkowsky’s “Five Theses” and Luke Muehlhauser’s “Why MIRI?”; this is my attempt to make explicit the claims that are in the background wheneve...
2025-01-04
15 min
AI Safety Fundamentals
The Easy Goal Inference Problem Is Still Hard
One approach to the AI control problem goes like this: Observe what the user of the system says and does. Infer the user's preferences. Try to make the world better according to the user's preference, perhaps while working alongside the user and asking clarifying questions. This approach has the major advantage that we can begin empirical work today — we can actually build systems which observe user behavior, try to figure out what the user wants, and then help with that. There are many applications that people care about already, and we can set to work on mak...
2025-01-04
07 min
AI Safety Fundamentals
Superintelligence: Instrumental Convergence
According to the orthogonality thesis, intelligent agents may have an enormous range of possible final goals. Nevertheless, according to what we may term the "instrumental convergence" thesis, there are some instrumental goals likely to be pursued by almost any intelligent agent, because there are some objectives that are useful intermediaries to the achievement of almost any final goal. We can formulate this thesis as follows: The instrumental convergence thesis: "Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent's goal being realized for a wide...
2025-01-04
17 min
AI Safety Fundamentals
Specification Gaming: The Flip Side of AI Ingenuity
Specification gaming is a behaviour that satisfies the literal specification of an objective without achieving the intended outcome. We have all had experiences with specification gaming, even if not by this name. Readers may have heard the myth of King Midas and the golden touch, in which the king asks that anything he touches be turned to gold - but soon finds that even food and drink turn to metal in his hands. In the real world, when rewarded for doing well on a homework assignment, a student might copy another student to get the right answers, rather than...
2025-01-04
13 min
AI Safety Fundamentals
Learning From Human Preferences
One step towards building safe AI systems is to remove the need for humans to write goal functions, since using a simple proxy for a complex goal, or getting the complex goal a bit wrong, can lead to undesirable and even dangerous behavior. In collaboration with DeepMind's safety team, we've developed an algorithm which can infer what humans want by being told which of two proposed behaviors is better. Original article: https://openai.com/research/learning-from-human-preferences Authors: Dario Amodei, Paul Christiano, Alex Ray. A podcast by B...
2025-01-04
06 min
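The idea of learning a goal from pairwise comparisons can be sketched with a simple reward model trained on a Bradley-Terry style loss: the probability that behaviour A is preferred over behaviour B is sigmoid(r(A) - r(B)). The features and "human" labels below are synthetic placeholders, not the original authors' setup.

```python
# Sketch of reward learning from pairwise preferences (Bradley-Terry style loss).
import torch
import torch.nn as nn

torch.manual_seed(0)
reward_model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Each example: features of behaviour A, features of behaviour B, and 1.0 if the
# human preferred A (0.0 if they preferred B).
a = torch.randn(256, 8)
b = torch.randn(256, 8)
preferred_a = (a.sum(dim=1) > b.sum(dim=1)).float()   # stand-in for human judgements

for step in range(500):
    optimizer.zero_grad()
    logits = reward_model(a).squeeze(-1) - reward_model(b).squeeze(-1)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, preferred_a)
    loss.backward()
    optimizer.step()
# The learned reward can then stand in for a hand-written goal when training an agent.
```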
AI Safety Fundamentals
What Failure Looks Like
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. The stereotyped image of AI catastrophe is a powerful, malicious AI system that takes its creators by surprise and quickly achieves a decisive advantage over the rest of humanity. I think this is probably not what failure will look like, and I want to try to paint a more realistic picture. I'll tell the story in two parts: Part I: machine learning will increase our ability to "get what we can measure," which could cause a slow-rolling catastrophe. ("Going out wi...
2025-01-04
18 min
AI Safety Fundamentals
Deceptively Aligned Mesa-Optimizers: It’s Not Funny if I Have to Explain It
Our goal here is to popularize obscure and hard-to-understand areas of AI alignment. So let's try to understand the incomprehensible meme! Our main source will be Hubinger et al 2019, Risks From Learned Optimization In Advanced Machine Learning Systems. Mesa- is a Greek prefix which means the opposite of meta-. To "go meta" is to go one level up; to "go mesa" is to go one level down (nobody has ever actually used this expression, sorry). So a mesa-optimizer is an optimizer one level down from you. Consider evolution, optimizing the fitness of...
2025-01-04
26 min
AI Safety Fundamentals
The Alignment Problem From a Deep Learning Perspective
Within the coming decades, artificial general intelligence (AGI) may surpass human capabilities at a wide range of important tasks. We outline a case for expecting that, without substantial effort to prevent it, AGIs could learn to pursue goals which are undesirable (i.e. misaligned) from a human perspective. We argue that if AGIs are trained in ways similar to today's most capable models, they could learn to act deceptively to receive higher reward, learn internally-represented goals which generalize beyond their training distributions, and pursue those goals using power-seeking strategies. We outline how the deployment of misaligned AGIs might irreversibly...
2025-01-04
33 min
AI Safety Fundamentals
AGI Safety From First Principles
This report explores the core case for why the development of artificial general intelligence (AGI) might pose an existential threat to humanity. It stems from my dissatisfaction with existing arguments on this topic: early work is less relevant in the context of modern machine learning, while more recent work is scattered and brief. This report aims to fill that gap by providing a detailed investigation into the potential risk from AGI misbehaviour, grounded by our current knowledge of machine learning, and highlighting important uncertainties. It identifies four key premises, evaluates existing arguments about them, and outlines some novel...
2025-01-04
13 min
AI Safety Fundamentals
ML Systems Will Have Weird Failure Modes
Previously, I've argued that future ML systems might exhibit unfamiliar, emergent capabilities, and that thought experiments provide one approach towards predicting these capabilities and their consequences. In this post I’ll describe a particular thought experiment in detail. We’ll see that taking thought experiments seriously often surfaces future risks that seem "weird" and alien from the point of view of current systems. I’ll also describe how I tend to engage with these thought experiments: I usually start out intuitively skeptical, but when I reflect on emergent behavior I find that some (but not all) of the skepticism goes a...
2025-01-04
13 min
AI Safety Fundamentals
Goal Misgeneralisation: Why Correct Specifications Aren’t Enough for Correct Goals
As we build increasingly advanced AI systems, we want to make sure they don’t pursue undesired goals. This is the primary concern of the AI alignment community. Undesired behaviour in an AI agent is often the result of specification gaming —when the AI exploits an incorrectly specified reward. However, if we take on the perspective of the agent we’re training, we see other reasons it might pursue undesired goals, even when trained with a correct specification. Imagine that you are the agent (the blue blob) being trained with reinforcement learning (RL) in the following 3D environment: The enviro...
2025-01-04
17 min
AI Safety Fundamentals
Thought Experiments Provide a Third Anchor
Previously, I argued that we should expect future ML systems to often exhibit "emergent" behavior, where they acquire new capabilities that were not explicitly designed or intended, simply as a result of scaling. This was a special case of a general phenomenon in the physical sciences called More Is Different. I care about this because I think AI will have a huge impact on society, and I want to forecast what future systems will be like so that I can steer things to be better. To that end, I find More Is Different to be troubling and disorienting. I’m...
2025-01-04
08 min
AI Safety Fundamentals
Biological Anchors: A Trick That Might Or Might Not Work
I've been trying to review and summarize Eliezer Yudkowksy's recent dialogues on AI safety. Previously in sequence: Yudkowsky Contra Ngo On Agents. Now we’re up to Yudkowsky contra Cotra on biological anchors, but before we get there we need to figure out what Cotra's talking about and what's going on.The Open Philanthropy Project ("Open Phil") is a big effective altruist foundation interested in funding AI safety. It's got $20 billion, probably the majority of money in the field, so its decisions matter a lot and it’s very invested in getting things right. In 2020, it asked seni...
2025-01-04
1h 10
AI Safety Fundamentals
A Short Introduction to Machine Learning
Despite the current popularity of machine learning, I haven’t found any short introductions to it which quite match the way I prefer to introduce people to the field. So here’s my own. Compared with other introductions, I’ve focused less on explaining each concept in detail, and more on explaining how they relate to other important concepts in AI, especially in diagram form. If you're new to machine learning, you shouldn't expect to fully understand most of the concepts explained here just after reading this post - the goal is instead to provide a broad framework which will c...
2025-01-04
17 min
AI Safety Fundamentals
What is AI Alignment?
To prevent rogue AIs, we'll have to align them. In this article, Adam Jones of BlueDot Impact introduces the concept of aligning AIs. He defines alignment as "making AI systems try to do what their creators intend them to do." Original text: https://aisafetyfundamentals.com/blog/what-is-ai-alignment/ Author: Adam Jones. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2024-05-01
11 min
AI Safety Fundamentals
What risks does AI pose?
This resource, written by Adam Jones at BlueDot Impact, provides a comprehensive overview of the existing and anticipated risks of AI. As you're going through the reading, consider what different futures might look like should different combinations of risks materialize. Original text: https://aisafetyfundamentals.com/blog/ai-risks/ Author: Adam Jones. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2024-04-23
24 min
AI Safety Fundamentals
“[Week 3] Compilation: Why Might Misaligned, Advanced AI Cause Catastrophe?” by BlueDot Impact
You may have seen arguments (such as these) for why people might create and deploy advanced AI that is both power-seeking and misaligned with human i…--- First published: August 2nd, 2023 Source: https://www.lesswrong.com/posts/sK7bsaNrghfEjttRs/compilation-why-might-misaligned-advanced-ai-cause --- Narrated by TYPE III AUDIO. Share feedback on this narration.
2023-08-02
18 min
AI Safety Fundamentals
Visualizing the Deep Learning Revolution
The field of AI has undergone a revolution over the last decade, driven by the success of deep learning techniques. This post aims to convey three ideas using a series of illustrative examples: There have been huge jumps in the capabilities of AIs over the last decade, to the point where it's becoming hard to specify tasks that AIs can't do. This progress has been primarily driven by scaling up a handful of relatively simple algorithms (rather than by developing a more principled or scientific understanding of deep learning). Very few people predicted that progress would be a...
2023-05-13
41 min
AI Safety Fundamentals
A Short Introduction to Machine Learning
Despite the current popularity of machine learning, I haven’t found any short introductions to it which quite match the way I prefer to introduce people to the field. So here’s my own. Compared with other introductions, I’ve focused less on explaining each concept in detail, and more on explaining how they relate to other important concepts in AI, especially in diagram form. If you're new to machine learning, you shouldn't expect to fully understand most of the concepts explained here just after reading this post - the goal is instead to provide a broad framework which will c...
2023-05-13
17 min
AI Safety Fundamentals
The AI Triad and What It Means for National Security Strategy
A single sentence can summarize the complexities of modern artificial intelligence: Machine learning systems use computing power to execute algorithms that learn from data. Everything that national security policymakers truly need to know about a technology that seems simultaneously trendy, powerful, and mysterious is captured in those 13 words. They specify a paradigm for modern AI—machine learning—in which machines draw their own insights from data, unlike the human-driven expert systems of the past. The same sentence also introduces the AI triad of algorithms, data, and computing power. Each element is vital to the power...
2023-05-13
27 min
AI Safety Fundamentals
As AI Agents Like Auto-GPT Speed up Generative AI Race, We All Need to Buckle Up
If you thought the pace of AI development had sped up since the release of ChatGPT last November, well, buckle up. Thanks to the rise of autonomous AI agents like Auto-GPT, BabyAGI and AgentGPT over the past few weeks, the race to get ahead in AI is just getting faster. And, many experts say, more concerning. Source: https://venturebeat.com/ai/as-ai-agents-like-auto-gpt-speed-up-generative-ai-race-we-all-need-to-buckle-up-the-ai-beat/ Narrated for AI Safety Fundamentals by TYPE III AUDIO. --- A podcast by BlueDot Impact. Learn more...
2023-05-13
07 min
AI Safety Fundamentals
Specification Gaming: The Flip Side of AI Ingenuity
Specification gaming is a behaviour that satisfies the literal specification of an objective without achieving the intended outcome. We have all had experiences with specification gaming, even if not by this name. Readers may have heard the myth of King Midas and the golden touch, in which the king asks that anything he touches be turned to gold - but soon finds that even food and drink turn to metal in his hands. In the real world, when rewarded for doing well on a homework assignment, a student might copy another student to get the right answers, rather than...
2023-05-13
13 min
AI Safety Fundamentals
The Need for Work on Technical AI Alignment
This page gives an overview of the alignment problem. It describes our motivation for running courses about technical AI alignment. The terminology should be relatively broadly accessible (not assuming any previous knowledge of AI alignment or much knowledge of AI/computer science). This piece describes the basic case for AI alignment research, which is research that aims to ensure that advanced AI systems can be controlled or guided towards the intended goals of their designers. Without such work, advanced AI systems could potentially act in ways that are severely at odds with their designers' in...
2023-05-13
34 min
AI Safety Fundamentals
Overview of How AI Might Exacerbate Long-Running Catastrophic Risks
Developments in AI could exacerbate long-running catastrophic risks, including bioterrorism, disinformation and resulting institutional dysfunction, misuse of concentrated power, nuclear and conventional war, other coordination failures, and unknown risks. This document compiles research on how AI might raise these risks. (Other material in this course discusses more novel risks from AI.) We draw heavily from previous overviews by academics, particularly Dafoe (2020) and Hendrycks et al. (2023). Source: https://aisafetyfundamentals.com/governance-blog/overview-of-ai-risk-exacerbation Narrated for AI Safety...
2023-05-13
24 min
AI Safety Fundamentals
Avoiding Extreme Global Vulnerability as a Core AI Governance Problem
Much has been written framing and articulating the AI governance problem from a catastrophic risks lens, but these writings have been scattered. This page aims to provide a synthesized introduction to some of these already prominent framings. This is just one attempt at suggesting an overall frame for thinking about some AI governance problems; it may miss important things. Some researchers think that unsafe development or misuse of AI could cause massive harms. A key contributor to some of these risks is that catastrophe may not require all or most relevant decision makers to make harmful decisions. Instead, harmful...
2023-05-13
11 min
AI Safety Fundamentals
Nobody’s on the Ball on AGI Alignment
Observing from afar, it’s easy to think there’s an abundance of people working on AGI safety. Everyone on your timeline is fretting about AI risk, and it seems like there is a well-funded EA-industrial-complex that has elevated this to their main issue. Maybe you’ve even developed a slight distaste for it all—it reminds you a bit too much of the woke and FDA bureaucrats, and Eliezer seems pretty crazy to you.That’s what I used to think too, a couple of years ago. Then I got to see things more up close. And here’s...
2023-05-13
17 min
AI Safety Fundamentals
AI Safety Seems Hard to Measure
In previous pieces, I argued that there’s a real and large risk of AI systems’ developing dangerous goals of their own and defeating all of humanity - at least in the absence of specific efforts to prevent this from happening. A young, growing field of AI safety research tries to reduce this risk, by finding ways to ensure that AI systems behave as intended (rather than forming ambitious aims of their own and deceiving and manipulating humans as needed to accomplish them).Maybe we’ll succeed in reducing the risk, and maybe...
2023-05-13
22 min
AI Safety Fundamentals
Why Might Misaligned, Advanced AI Cause Catastrophe?
You may have seen arguments (such as these) for why people might create and deploy advanced AI that is both power-seeking and misaligned with human interests. This may leave you thinking, "OK, but would such AI systems really pose catastrophic threats?" This document compiles arguments for the claim that misaligned, power-seeking, advanced AI would pose catastrophic risks. We'll see arguments for the following claims, which are mostly separate/independent reasons for concern: Humanity's past holds concerning analogies. AI systems have some major inherent advantages over humans. AIs could come to...
2023-05-13
20 min
AI Safety Fundamentals
Emergent Deception and Emergent Optimization
I've previously argued that machine learning systems often exhibit emergent capabilities, and that these capabilities could lead to unintended negative consequences. But how can we reason concretely about these consequences? There are two principles I find useful for reasoning about future emergent capabilities: If a capability would help get lower training loss, it will likely emerge in the future, even if we don't observe much of it now. As ML models get larger and are trained on more and better data, simpler heuristics will tend to get replaced by more complex heuristics. Using...
2023-05-13
33 min
AI Safety Fundamentals
Frontier AI Regulation: Managing Emerging Risks to Public Safety
Advanced AI models hold the promise of tremendous benefits for humanity, but society needs to proactively manage the accompanying risks. In this paper, we focus on what we term “frontier AI” models — highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety. Frontier AI models pose a distinct regulatory challenge: dangerous capabilities can arise unexpectedly; it is difficult to robustly prevent a deployed model from being misused; and, it is difficult to stop a model’s capabilities from proliferating broadly. To address these challenges, at least three building blocks for the regulation of front...
2023-05-13
29 min
AI Safety Fundamentals
Model Evaluation for Extreme Risks
Current approaches to building general-purpose AI systems tend to produce systems with both beneficial and harmful capabilities. Further progress in AI development could lead to capabilities that pose extreme risks, such as offensive cyber capabilities or strong manipulation skills. We explain why model evaluation is critical for addressing extreme risks. Developers must be able to identify dangerous capabilities (through “dangerous capability evaluations”) and the propensity of models to apply their capabilities for harm (through “alignment evaluations”). These evaluations will become critical for keeping policymakers and other stakeholders informed, and for making responsible decisions about model training, deployment, and security.
2023-05-13
56 min
AI Safety Fundamentals
Primer on Safety Standards and Regulations for Industrial-Scale AI Development
This primer introduces various aspects of safety standards and regulations for industrial-scale AI development: what they are, their potential and limitations, some proposals for their substance, and recent policy developments. Key points are: Standards are formal specifications of best practices, which can influence regulations. Regulations are requirements established by governments. Cutting-edge AI development is being done with individual companies spending over $100 million. This industrial scale may enable narrowly targeted and enforceable regulation to reduce the risks of cutting-edge AI development. Regulation of industrial-scale AI development faces various...
2023-05-13
15 min
AI Safety Fundamentals
Racing Through a Minefield: The AI Deployment Problem
Push AI forward too fast, and catastrophe could occur. Too slow, and someone else less cautious could do it. Is there a safe course? Source: https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/ Crossposted from the Cold Takes Audio podcast. --- A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2023-05-13
21 min
AI Safety Fundamentals
Choking off China’s Access to the Future of AI
Introduction On October 7, 2022, the Biden administration announced a new export controls policy on artificial intelligence (AI) and semiconductor technologies to China. These new controls—a genuine landmark in U.S.-China relations—provide the complete picture after a partial disclosure in early September generated confusion. For weeks the Biden administration has been receiving criticism in many quarters for a new round of semiconductor export control restrictions, first disclosed on September 1. The restrictions block leading U.S. AI computer chip designers, such as Nvidia and AMD, from selling their high-end chips for AI and supercomputing to China. The crit...
2023-05-13
07 min
AI Safety Fundamentals
The State of AI in Different Countries — An Overview
Some are concerned that regulating AI progress in one country will slow that country down, putting it at a disadvantage in a global AI arms race. Many proponents of AI regulation disagree; they have pushed back on the overall framework, pointed out serious drawbacks and limitations of racing, and argued that regulations do not have to slow progress down. Another disagreement is about whether countries are in fact in a neck and neck arms race; some believe that the United States and its allies have a significant lead which would allow for regulation even if t...
2023-05-13
36 min
AI Safety Fundamentals
Primer on AI Chips and AI Governance
If governments could regulate the large-scale use of "AI chips," that would likely enable governments to govern frontier AI development—to decide who does it and under what rules. In this article, we will use the term "AI chips" to refer to cutting-edge, AI-specialized computer chips (such as NVIDIA's A100 and H100 or Google's TPUv4). Frontier AI models like GPT-4 are already trained using tens of thousands of AI chips, and trends suggest that more advanced AI will require even more computing power. Source: https://ai...
2023-05-13
25 min
AI Safety Fundamentals
A Tour of Emerging Cryptographic Technologies
Historically, progress in the field of cryptography has been enormously consequential. Over the past century, for instance, cryptographic discoveries have played a key role in a world war and made it possible to use the internet for business and private communication. In the interest of exploring the impact the field may have in the future, I consider a suite of more recent developments. My primary focus is on blockchain-based technologies (such as cryptocurrencies) and on techniques for computing on confidential data (such as secure multiparty computation). I provide an introduction to these technologies that assumes no mathematical background or...
2023-05-13
30 min
AI Safety Fundamentals
Historical Case Studies of Technology Governance and International Agreements
The following excerpts summarize historical case studies that are arguably informative for AI governance. The case studies span nuclear arms control, militaries' adoption of electricity, and environmental agreements. (For ease of reading, we have edited the formatting of the following excerpts and added bolding.) Source: https://aisafetyfundamentals.com/governance-blog/historical-case-studies Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO. --- A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2023-05-13
35 min
AI Safety Fundamentals
What Does It Take to Catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring
As advanced machine learning systems’ capabilities begin to play a significant role in geopolitics and societal order, it may become imperative that (1) governments be able to enforce rules on the development of advanced ML systems within their borders, and (2) countries be able to verify each other’s compliance with potential future international agreements on advanced ML development. This work analyzes one mechanism to achieve this, by monitoring the computing hardware used for large-scale NN training. The framework’s primary goal is to provide governments high confidence that no actor uses large quantities of specialized ML chips to execute a traini...
2023-05-13
32 min
AI Safety Fundamentals
OpenAI Charter
Our Charter describes the principles we use to execute on OpenAI's mission. --- Source: https://openai.com/charter --- Narrated by TYPE III AUDIO. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2023-05-13
02 min
AI Safety Fundamentals
LP Announcement by OpenAI
We've created OpenAI LP, a new "capped-profit" company that allows us to rapidly increase our investments in compute and talent while including checks and balances to actualize our mission. The original text contained 1 footnote which was omitted from this narration. --- Source: https://openai.com/blog/openai-lp --- Narrated by TYPE III AUDIO. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2023-05-13
06 min
AI Safety Fundamentals
Let’s Think About Slowing Down AI
If you fear that someone will build a machine that will seize control of the world and annihilate humanity, then one kind of response is to try to build further machines that will seize control of the world even earlier without destroying it, forestalling the ruinous machine’s conquest. An alternative or complementary kind of response is to try to avert such machines being built at all, at least while the degree of their apocalyptic tendencies is ambiguous. The latter approach seems to me like the kind of basic and obvious thing worthy of at least consideration, and...
2023-05-13
1h 14
AI Safety Fundamentals
International Institutions for Advanced AI
International institutions may have an important role to play in ensuring advanced AI systems benefit humanity. International collaborations can unlock AI’s ability to further sustainable development, and coordination of regulatory efforts can reduce obstacles to innovation and the spread of benefits. Conversely, the potential dangerous capabilities of powerful and general-purpose AI systems create global externalities in their development and deployment, and international efforts to further responsible AI practices could help manage the risks they pose. This paper identifies a set of governance functions that could be performed at an international level to address these challenges, ranging from supporting ac...
2023-05-13
42 min
AI Safety Fundamentals
What AI Companies Can Do Today to Help With the Most Important Century
I've been writing about tangible things we can do today to help the most important century go well. Previously, I wrote about helpful messages to spread and how to help via full-time work. This piece is about what major AI companies can do (and not do) to be helpful. By "major AI companies," I mean the sorts of AI companies that are advancing the state of the art, and/or could play a major role in how very powerful AI systems end up getting used. This piece could be useful to people who work at tho...
2023-05-13
18 min
AI Safety Fundamentals
12 Tentative Ideas for US AI Policy
About two years ago, I wrote that "it's difficult to know which 'intermediate goals' [e.g. policy goals] we could pursue that, if achieved, would clearly increase the odds of eventual good outcomes from transformative AI." Much has changed since then, and in this post I give an update on 12 ideas for US policy goals.[1] Many […] The original text contained 7 footnotes which were omitted from this narration. --- Source: https://www.openphilanthropy.org/research/12-tentative-ideas-for-us-ai-policy --- Narrated by TYPE III AUDIO. A podcast by BlueDot Impact. Learn...
2023-05-13
09 min
AI Safety Fundamentals
Career Resources on AI Strategy Research
(Last updated August 31, 2022) Summary and Introduction: One potential way to improve the impacts of AI is helping various actors figure out good AI strategies—that is, good high-level plans focused on AI. To support people who are interested in that, we compile some relevant career i... --- Source: https://aisafetyfundamentals.com/governance-blog/ai-strategy-careers --- Narrated by TYPE III AUDIO. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2023-05-13
18 min
AI Safety Fundamentals
AI Governance Needs Technical Work
People who want to improve the trajectory of AI sometimes think their options for object-level work are (i) technical safety work and (ii) non-technical governance work. But that list misses things; another group of arguably promising options is technical work in AI governance, i.e. technical work that mainly boosts AI governance interventions. This post provides a brief overview of some ways to do this work—what they are, why they might be valuable, and what you can do if you’re interested. I discuss: Engineering technical levers to make AI coordination/regulation enforceable (through hardware engineering, software/ML engi...
2023-05-13
15 min
AI Safety Fundamentals
My Current Impressions on Career Choice for Longtermists
This post summarizes the way I currently think about career choice for longtermists. I have put much less time into thinking about this than 80,000 Hours, but I think it’s valuable for there to be multiple perspectives on this topic out there.Edited to add: see below for why I chose to focus on longtermism in this post.While the jobs I list overlap heavily with the jobs 80,000 Hours lists, I organize them and conceptualize them differently. 80,000 Hours tends to emphasize “paths” to particular roles working on particular causes; by con...
2023-05-13
47 min
AI Safety Fundamentals
List of EA Funding Opportunities
This is a quickly written post listing opportunities for people to apply for funding from funders that are part of the EA community. … --- First published: October 26th, 2021 Source: https://forum.effectivealtruism.org/posts/DqwxrdyQxcMQ8P2rD/list-of-ea-funding-opportunities --- Narrated by TYPE III AUDIO. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2023-05-13
12 min
AI Safety Fundamentals
Some Talent Needs in AI Governance
I carried out a short project to better understand talent needs in AI governance. This post reports on my findings. How this post could be helpful: If you're trying to upskill in AI governance, this post could help you to understand the kinds of work and skills that are in demand. If you're a field-builder trying to find or upskill people to work in AI governance, this post could help you to understand what talent search/development efforts are especially valuable. Source: https://aisafetyfundamentals.com/governance-blog/some...
2023-05-13
15 min
AI Safety Fundamentals
AI Governance Needs Technical Work
People who want to improve the trajectory of AI sometimes think their options for object-level work are (i) technical safety work and (ii) non-technical governance work. But that list misses things; another group of arguably promising options is technical work in AI governance, i.e. technical work that mainly boosts AI governance interventions. This post provides a brief overview of some ways to do this work—what they are, why they might be valuable, and what you can do if you're interested. I discuss: Engineering technical levers to make AI coor...
2023-05-13
14 min
AI Safety Fundamentals
China-Related AI Safety and Governance Paths
Expertise in China and its relations with the world might be critical in tackling some of the world's most pressing problems. In particular, China's relationship with the US is arguably the most important bilateral relationship in the world, with these two countries collectively accounting for over 40% of global GDP. These considerations led us to publish a guide to improving China–Western coordination on global catastrophic risks and other key problems in 2018. Since then, we have seen an increase in the number of people exploring this area. China is one of the most important countries developing and sh...
2023-05-13
47 min
The bluedot Podcast
bluedot festival 2022 - Helen Pankhurst In Conversation with Laura Bates
Welcome to the bluedot podcast with Chris Hawkins. bluedot is finally back! And after an extraordinary return to Jodrell Bank this summer, we're excited to be able to share some of the many highlights of this year's bluedot 2022. Over the coming months, you can enjoy full talks, panels and listening parties from bluedot – including headline speakers from our Mission Control arena, and intimate chats in our Notes culture tent. In this episode, you'll be hearing Helen Pankhurst in conversation with Laura Bates, the author and creator of Everyday Sexism. This ta...
2022-09-30
47 min
The bluedot Podcast
bluedot festival 2022 - A Certain Ratio Listening Party with Chris Hawkins
Welcome to the bluedot podcast, with Chris Hawkins. bluedot is finally back! And after an extraordinary return to Jodrell Bank this summer, we're excited to be able to share some of the many highlights of this year's bluedot 2022. Over the coming months, you can enjoy full talks, panels and listening parties from bluedot – including headline speakers from our Mission Control arena, and intimate chats in our Notes culture tent. We took the bluedot podcast onstage at bluedot 2022, and this In Conversation recorded live features Chris Hawkins in conversation with A Ce...
2022-09-16
50 min
The bluedot Podcast
bluedot festival 2022 - Kelly Lee Owens Listening Party with Tim Burgess
Welcome to the bluedot podcast. bluedot is finally back! And after an extraordinary return to Jodrell Bank this summer, we're excited to be able to share some of the many highlights of this year's bluedot 2022. Over the coming months, you can enjoy full talks, panels and listening parties from bluedot – including headline speakers from our Mission Control arena, and intimate chats in our Notes culture tent. This episode is a full recording of a special Tim's Listening Party, recorded on the Friday of bluedot 2022, with Kelly Lee Owens in conversation wi...
2022-09-02
1h 04
The bluedot Podcast
In Conversation with Kate Vokes, Gavin Sharp, Inga Hurst & Boshra Ghgam
Welcome to the bluedot podcast. This is the third instalment of our In Conversation miniseries of talks and panels in Manchester, powered by our friends at bruntwood. In this live discussion, we pose the question 'how does culture build community?'. Hosted by bruntwood's and The Oglesby Charitable Trust's Kate Vokes, and featuring Band On The Wall's Gavin Sharp, Inga Hirst from the Royal Exchange Theatre and actor and spoken word artist Boshra Ghgam, this panel discusses Manchester's cultural milestones, and wider implications of what culture can do for a city, and vice-versa. E...
2022-08-19
1h 00
The bluedot Podcast
In Conversation with Professor Teresa Anderson Live at Jodrell Bank
Teresa Anderson is an award-winning physicist and director of Jodrell Bank Centre for Engagement, which she founded in 2010. Alongside Tim O’Brien, Teresa spearheaded the campaign to make Jodrell Bank a UNESCO World Heritage Site, an accolade it received in 2019. Teresa co-founded Live From Jodrell Bank in 2012 and the series of shows featured Elbow, Sigur Ros, The Halle and more, expanding into the weekend of science and music you now know as bluedot, in 2016…Welcome to the bluedot podcast… with Professor Teresa Anderson! Hosted on Acast. See acast.com/privacy for mo...
2022-07-08
31 min
The bluedot Podcast
In Conversation with The Radiophonic Workshop & Stealing Sheep
It’s a unique collaboration of electronic legends and indie favourites – the past and present combining to create something futuristic and extraordinary. La Planète Sauvage – the soundtrack to an iconic 1973 film – is a project that sees The Radiophonic Workshop and Stealing Sheep join forces for an album released to mark 2021’s Delia Derbyshire Day. And this July it comes to bluedot for a very special performance on the Sunday of this year’s festival. This is the bluedot podcast with The Radiophonic Workshop and Stealing Sheep. Hosted on Acast. See acast...
2022-07-01
29 min
The bluedot Podcast
In Conversation with Kelly Lee Owens
She’s the producer, songwriter and DJ whose avant-garde techno pop has seen her release three extraordinary albums to date. The most recent – LP.8 – was released earlier this year. Her combination of ethereal, atmospheric and at times industrial sounds has seen her win fans in Björk, St Vincent and John Cale, all of whom she has gone on to collaborate with. Having joined us at bluedot in 2019, she returns as part of our Friday line-up this July alongside Spiritualized, Kojey Radical, Groove Armada and more. Welcome to the bluedot podcast… with Kelly Lee Owens....
2022-06-24
37 min
The bluedot Podcast
In Conversation with Tom Heap
As part of bluedot’s partnership with our friends at Bruntwood, we’re curating a series of special In Conversation talks at Bruntwood venues across the country, hosted by me – Chris Hawkins. The first of these recently took place, featuring Tom Heap, the author of 39 Ways To Save The Planet and a regular fixture on Countryfile. We spoke with Tom in front of a live audience at Bruntwood’s Bright Building at Manchester Science Park, and you can now enjoy the live recording of that talk in full here on the bluedot podcast. For more information about bluedot In Conversation, powered b...
2022-06-03
40 min
The bluedot Podcast
In Conversation with Porridge Radio
They’re the Brighton-founded project of songwriter Dana Margolin, whose prolific creative output has seen her go from a solo, self-releasing songwriter to the frontwoman of a Mercury Prize-nominated band. That Mercury nomination, in 2020 for their album Every Bad, is now followed by the new album Waterslide, Diving Board, Ladder to the Sky. Welcome to the bluedot podcast with Porridge Radio. Hosted on Acast. See acast.com/privacy for more information.
2022-05-20
28 min
The bluedot Podcast
In Conversation with Jane Weaver
She’s the Manchester-based producer and songwriter whose extensive career has seen her carve out a unique sound that takes in psychedelia, folk and space rock, making her the quintessential bluedot artist. Following the incredible success of her 2021 album Flock, she returns to bluedot this July, having first appeared back in 2017. Welcome... to the bluedot podcast – with Jane Weaver. Hosted on Acast. See acast.com/privacy for more information.
2022-05-06
28 min
The bluedot Podcast
In Conversation with Jim Al-Khalili
Today’s world is unpredictable and full of contradictions, and navigating its complexities while trying to make the best decisions is far from easy. In his new book The Joy of Science, the acclaimed physicist and bluedot favourite Professor Jim Al-Khalili presents 8 short lessons on how to unlock the clarity, empowerment, and joy of thinking and living a little more scientifically. In this brief guide to leading a more rational life, Professor Al-Khalili invites readers to engage with the world as scientists have been trained to do. The scientific method has served humankind well in its qu...
2022-04-22
32 min
The bluedot Podcast
In Conversation with Groove Armada
One of the UK’s best-loved dance acts, the three-time GRAMMY-nominated Groove Armada have been a mainstay of club and chill-out culture for over twenty years. Since the release of their debut Northern Star and the iconic Vertigo, which tipped them into household name territory, they’ve been synonymous with a sound that’s traversed house, pop, disco and hip-hop, equal parts up and down tempo. And over the course of their nine studio albums, they’ve worked with an extraordinary array of collaborators and guest vocalists including Richie Havens, Angie Stone, Candi Staton, Neneh Cherry, PNAU’s Nick Littl...
2022-04-15
28 min
The bluedot Podcast
In Conversation with Lanterns on the Lake
Lanterns on the Lake are a Mercury Prize-nominated, critically acclaimed and adored Newcastle band, whose work has seen them collaborate with the Royal Northern Sinfonia and tour with the likes of Explosions in the Sky, Yann Tiersen and Low. We’re so excited to welcome Lanterns on the Lake to Saturday’s Lovell Stage line-up at bluedot 2022 this July. And Hazel from the band joins us now for a special In Conversation. Hosted on Acast. See acast.com/privacy for more information.
2022-04-01
25 min
The bluedot Podcast
In Conversation with Mogwai
They’re an award-winning, legendary Scottish band whose twenty-five-year career has produced ten incredible albums, unique collaborations with the likes of Clint Mansell and Nine Inch Nails, and a prolific history of soundtracking for film and TV, including the 2006 documentary Zidane: A 21st Century Portrait and Mark Cousins’ 2015 piece Atomic. Following the Mercury Prize-nominated As The Love Continues, their tenth album and their first to hit number one in the Album Charts, we’re thrilled to welcome Mogwai to headline bluedot 2022 this July. And Stuart Braithwaite joins us for a special In Conversation....
2022-03-18
24 min
The bluedot Podcast
Jill Tarter and Ana Matronic
Chair of SETI, one of Discover magazine’s most important women in science, and the inspiration behind Jodie Foster’s Ellie in Contact, Jill Tarter is an icon of astronomy. bluedot favourite Ana Matronic met Jill via Skype to explore the cosmos and beyond. Part of bluedot's A Weekend In Outer Space, July 2020. Hosted on Acast. See acast.com/privacy for more information.
2021-01-15
53 min
The bluedot Podcast
Diversity and Representation in S.T.E.M.
Angela Saini and Tana Joseph join Jim Al-Khalili for an exploration of diversity and representation in the world of bluedot, how STEM can learn from other disciplines, and what the future of our institutions looks like. Part of bluedot's A Weekend In Outer Space, July 2020. Hosted on Acast. See acast.com/privacy for more information.
2021-01-15
55 min
The bluedot Podcast
The Hitchhiker's Guide to the Galaxy Reunion
Hitchhiker's archivist Kevin Davies welcomes an all-star panel of Hitchhiker's legends to commemorate Douglas Adams' iconic series, featuring John Lloyd, James Thrift, Sandra Dickinson, Philip Pope and Toby Longworth. Part of bluedot's A Weekend In Outer Space, July 2020. Hosted on Acast. See acast.com/privacy for more information.
2021-01-15
49 min
The bluedot Podcast
Ann Druyan with Brian Cox and Robin Ince
We join forces with our friends at The Cosmic Shambles Network for a very special 'In Conversation' with the legendary Ann Druyan, hosted by Professor Brian Cox and Robin Ince. Part of bluedot's A Weekend In Outer Space, July 2020. Hosted on Acast. See acast.com/privacy for more information.
2021-01-15
1h 01
The bluedot Podcast
Welcome
Welcome to The bluedot Podcast. Hosted on Acast. See acast.com/privacy for more information.
2021-01-15
00 min