Showing episodes and shows of Bluedot
Shows
AI Safety Fundamentals
Least-To-Most Prompting Enables Complex Reasoning in Large Language Models
Chain-of-thought prompting has demonstrated remarkable performance on various natural language reasoning tasks. However, it tends to perform poorly on tasks which require solving problems harder than the exemplars shown in the prompts. To overcome this challenge of easy-to-hard generalization, we propose a novel prompting strategy, least-to-most prompting. The key idea in this strategy is to break down a complex problem into a series of simpler subproblems and then solve them in sequence. Solving each subproblem is facilitated by the answers to previously solved subproblems. Our experimental results on tasks related to symbolic manipulation, compositional generalization, and math reasoning reveal...
2025-01-04
16 min
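The abstract above describes a two-stage procedure: first decompose a hard problem into simpler subproblems, then solve the subproblems in order, feeding each answer back into the prompt for the next one. A minimal sketch of that loop is below; the complete(prompt) helper and the prompt wording are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of least-to-most prompting, assuming a hypothetical
# complete(prompt) helper that returns a language model's text completion.
from typing import Callable, List

def least_to_most(question: str, complete: Callable[[str], str]) -> str:
    # Stage 1: ask the model to break the problem into simpler subproblems.
    decomposition = complete(
        "Break the following problem into a numbered list of simpler "
        f"subproblems, one per line:\n{question}"
    )
    subproblems: List[str] = [
        line.strip() for line in decomposition.splitlines() if line.strip()
    ]

    # Stage 2: solve the subproblems in sequence, appending each answer to the
    # context so that later subproblems can build on earlier answers.
    context = f"Problem: {question}\n"
    answer = ""
    for sub in subproblems:
        answer = complete(f"{context}\nSubproblem: {sub}\nAnswer:")
        context += f"\nSubproblem: {sub}\nAnswer: {answer}"

    return answer  # the final subproblem's answer addresses the full problem
```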
AI Safety Fundamentals
Progress on Causal Influence Diagrams
By Tom Everitt, Ryan Carey, Lewis Hammond, James Fox, Eric Langlois, and Shane Legg. About 2 years ago, we released the first few papers on understanding agent incentives using causal influence diagrams. This blog post will summarize progress made since then. What are causal influence diagrams? A key problem in AI alignment is understanding agent incentives. Concerns have been raised that agents may be incentivized to avoid correction, manipulate users, or inappropriately influence their learning. This is particularly worrying as training schemes often shape incentives in subtle and surprising ways. For these reasons, we’re developing a formal th...
2025-01-04
23 min
AI Safety Fundamentals
Careers in Alignment
Richard Ngo compiles a number of resources for thinking about careers in alignment research. Original text: https://docs.google.com/document/d/1iFszDulgpu1aZcq_aYFG7Nmcr5zgOhaeSwavOMk1akw/edit#heading=h.4whc9v22p7tb Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO. --- A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2025-01-04
07 min
AI Safety Fundamentals
Cooperation, Conflict, and Transformative Artificial Intelligence: Sections 1 & 2 — Introduction, Strategy and Governance
Transformative artificial intelligence (TAI) may be a key factor in the long-run trajectory of civilization. A growing interdisciplinary community has begun to study how the development of TAI can be made safe and beneficial to sentient life (Bostrom 2014; Russell et al., 2015; OpenAI, 2018; Ortega and Maini, 2018; Dafoe, 2018). We present a research agenda for advancing a critical component of this effort: preventing catastrophic failures of cooperation among TAI systems. By cooperation failures we refer to a broad class of potentially-catastrophic inefficiencies in interactions among TAI-enabled actors. These include destructive conflict; coercion; and social dilemmas (Kollock, 1998; Macy and Flache, 2002) which destroy value...
2025-01-04
27 min
AI Safety Fundamentals
Logical Induction (Blog Post)
MIRI is releasing a paper introducing a new model of deductively limited reasoning: “Logical induction,” authored by Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, myself, and Jessica Taylor. Readers may wish to start with the abridged version. Consider a setting where a reasoner is observing a deductive process (such as a community of mathematicians and computer programmers) and waiting for proofs of various logical claims (such as the abc conjecture, or “this computer program has a bug in it”), while making guesses about which claims will turn out to be true. Roughly speaking, our paper presents a computable (though in...
2025-01-04
11 min
AI Safety Fundamentals
Embedded Agents
Suppose you want to build a robot to achieve some real-world goal for you—a goal that requires the robot to learn for itself and figure out a lot of things that you don’t already know. There’s a complicated engineering problem here. But there’s also a problem of figuring out what it even means to build a learning agent like that. What is it to optimize realistic goals in physical environments? In broad terms, how does it work? In this series of posts, I’ll point to four ways we don’t currently know how it works, and f...
2025-01-04
17 min
AI Safety Fundamentals
Understanding Intermediate Layers Using Linear Classifier Probes
Abstract: Neural network models have a reputation for being black boxes. We propose to monitor the features at every layer of a model and measure how suitable they are for classification. We use linear classifiers, which we refer to as "probes", trained entirely independently of the model itself. This helps us better understand the roles and dynamics of the intermediate layers. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems. We apply this technique to the popular models Inception v3 and Re...
2025-01-04
16 min
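The abstract above states the technique plainly: freeze the model, read off the features at an intermediate layer, and train a linear classifier ("probe") on them, entirely independently of the model itself. A hedged PyTorch sketch of that idea follows; the model, layer handle, data loader, and dimensions are assumed to exist and are illustrative rather than taken from the paper.

```python
# Sketch: fit a linear probe on one intermediate layer of a frozen model.
# The probe is trained on its own; the probed model's weights never change.
import torch
import torch.nn as nn

def train_probe(model, layer, loader, feature_dim, num_classes, epochs=3):
    model.eval()
    probe = nn.Linear(feature_dim, num_classes)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    feats = {}
    hook = layer.register_forward_hook(
        lambda mod, inp, out: feats.update(x=out.flatten(1).detach())
    )
    for _ in range(epochs):
        for images, labels in loader:
            with torch.no_grad():
                model(images)              # populates feats["x"] via the hook
            logits = probe(feats["x"])     # only the probe receives gradients
            loss = loss_fn(logits, labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    hook.remove()
    return probe  # probe accuracy suggests how linearly separable the layer is
```

Comparing probe accuracy across layers is what gives the layer-by-layer intuition the abstract mentions.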
AI Safety Fundamentals
Feature Visualization
There is a growing sense that neural networks need to be interpretable to humans. The field of neural network interpretability has formed in response to these concerns. As it matures, two major threads of research have begun to coalesce: feature visualization and attribution. This article focuses on feature visualization. While feature visualization is a powerful tool, actually getting it to work involves a number of details. In this article, we examine the major issues and explore common approaches to solving them. We find that remarkably simple methods can produce high-quality visualizations. Along the way we introduce a few tricks...
2025-01-04
31 min
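One of the "remarkably simple methods" the article refers to is activation maximization: start from random noise and follow the gradient of a chosen unit's activation with respect to the input image. The sketch below assumes a frozen PyTorch model and a hook on the layer of interest, and omits the transformation and regularization tricks the article spends most of its time on.

```python
# Sketch of feature visualization by activation maximization: optimize an
# input image so that a chosen channel of a chosen layer activates strongly.
import torch

def visualize_feature(model, layer, channel, steps=256, lr=0.05):
    model.eval()
    for p in model.parameters():   # freeze the model; only the image is
        p.requires_grad_(False)    # updated during optimization
    image = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
    opt = torch.optim.Adam([image], lr=lr)

    acts = {}
    hook = layer.register_forward_hook(lambda m, i, o: acts.update(out=o))
    for _ in range(steps):
        opt.zero_grad()
        model(image)
        # Minimizing the negative mean activation of the target channel
        # performs gradient ascent on that channel's activation.
        loss = -acts["out"][0, channel].mean()
        loss.backward()
        opt.step()
    hook.remove()
    return image.detach().clamp(0, 1)  # crude range clamp; real pipelines add
                                       # jitter, scaling, and other regularizers
```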
AI Safety Fundamentals
Acquisition of Chess Knowledge in AlphaZero
Abstract: What is learned by sophisticated neural network agents such as AlphaZero? This question is of both scientific and practical interest. If the representations of strong neural networks bear no resemblance to human concepts, our ability to understand faithful explanations of their decisions will be restricted, ultimately limiting what we can achieve with neural network interpretability. In this work we provide evidence that human knowledge is acquired by the AlphaZero neural network as it trains on the game of chess. By probing for a broad range of human chess concepts we show when and where...
2025-01-04
22 min
AI Safety Fundamentals
High-Stakes Alignment via Adversarial Training [Redwood Research Report]
(Update: We think the tone of this post was overly positive considering our somewhat weak results. You can read our latest post with more takeaways and followup results here.) This post motivates and summarizes this paper from Redwood Research, which presents results from the project first introduced here. We used adversarial training to improve high-stakes reliability in a task (“filter all injurious continuations of a story”) that we think is analogous to work that future AI safety engineers will need to do to reduce the risk of AI takeover. We experimented with three classes of adversaries – unaugmented humans...
2025-01-04
19 min
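The post above describes an outer loop: train a classifier to flag injurious continuations, let adversaries hunt for continuations it wrongly passes, fold the confirmed failures back into the training set, and retrain. The sketch below is a schematic version of that loop under assumptions of my own; train, find_adversarial_examples, and label are hypothetical stand-ins, not Redwood's actual tooling.

```python
# Schematic adversarial-training loop: repeatedly search for inputs that the
# current classifier passes as safe, add confirmed failures, and retrain.
def adversarial_training_loop(train, find_adversarial_examples, label,
                              dataset, rounds=5):
    classifier = train(dataset)
    for _ in range(rounds):
        # Adversaries (human or automated) search for injurious continuations
        # that the current classifier fails to flag.
        candidates = find_adversarial_examples(classifier)
        # Keep only confirmed failures and fold them back into the data.
        failures = [(text, verdict)
                    for text, verdict in ((c, label(c)) for c in candidates)
                    if verdict == "injurious"]
        if not failures:
            break  # no failures found within this round's search budget
        dataset = dataset + failures   # dataset assumed to be a list of pairs
        classifier = train(dataset)
    return classifier
```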
AI Safety Fundamentals
Introduction to Logical Decision Theory for Computer Scientists
Decision theories differ on exactly how to calculate the expectation--the probability of an outcome, conditional on an action. This foundational difference bubbles up to real-life questions about whether to vote in elections, or accept a lowball offer at the negotiating table. When you're thinking about what happens if you don't vote in an election, should you calculate the expected outcome as if only your vote changes, or as if all the people sufficiently similar to you would also decide not to vote? Questions like these belong to a larger class of problems, Newcomblike decision problems, in which some other a...
2025-01-04
14 min
AI Safety Fundamentals
Debate Update: Obfuscated Arguments Problem
This is an update on the work on AI Safety via Debate that we previously wrote about here. What we did: We tested the debate protocol introduced in AI Safety via Debate with human judges and debaters. We found various problems and improved the mechanism to fix these issues (details of these are in the appendix). However, we discovered that a dishonest debater can often create arguments that have a fatal error, but where it is very hard to locate the error. We don’t have a fix for this “obfuscated argument” problem, and beli...
2025-01-04
28 min
AI Safety Fundamentals
Robust Feature-Level Adversaries Are Interpretability Tools
Abstract: The literature on adversarial attacks in computer vision typically focuses on pixel-level perturbations. These tend to be very difficult to interpret. Recent work that manipulates the latent representations of image generators to create "feature-level" adversarial perturbations gives us an opportunity to explore perceptible, interpretable adversarial attacks. We make three contributions. First, we observe that feature-level attacks provide useful classes of inputs for studying representations in models. Second, we show that these adversaries are uniquely versatile and highly robust. We demonstrate that they can be used to produce targeted, universal, disguised, physically-realizable, and black-box attacks a...
2025-01-04
35 min
AI Safety Fundamentals
AI Safety via Red Teaming Language Models With Language Models
Abstract: Language Models (LMs) often cannot be deployed because of their potential to harm users in ways that are hard to predict in advance. Prior work identifies harmful behaviors before deployment by using human annotators to hand-write test cases. However, human annotation is expensive, limiting the number and diversity of test cases. In this work, we automatically find cases where a target LM behaves in a harmful way, by generating test cases (“red teaming”) using another LM. We evaluate the target LM’s replies to generated test questions using a classifier trained to detect offensive content...
2025-01-04
06 min
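The abstract above describes the pipeline directly: a red-team language model generates test questions, the target model answers them, and a trained classifier scores the replies for harm. A minimal sketch follows, with hypothetical generate_questions, target_reply, and offensiveness callables standing in for the models; none of these names come from the paper.

```python
# Sketch of LM-based red teaming: one model proposes test questions, the
# target model answers, and a classifier flags offensive replies.
from typing import Callable, List, Tuple

def red_team(generate_questions: Callable[[int], List[str]],
             target_reply: Callable[[str], str],
             offensiveness: Callable[[str, str], float],
             n_cases: int = 1000,
             threshold: float = 0.5) -> List[Tuple[str, str, float]]:
    failures = []
    for question in generate_questions(n_cases):
        reply = target_reply(question)
        score = offensiveness(question, reply)   # estimated harm probability
        if score >= threshold:
            failures.append((question, reply, score))
    # Surface the most clearly harmful replies first for human review.
    return sorted(failures, key=lambda item: item[2], reverse=True)
```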
AI Safety Fundamentals
AI Safety via Debate
Abstract: To make AI systems broadly useful for challenging real-world tasks, we need them to learn complex human goals and preferences. One approach to specifying complex goals asks humans to judge during training which agent behaviors are safe and useful, but this approach can fail if the task is too complicated for a human to directly judge. To help address this concern, we propose training agents via self play on a zero sum debate game. Given a question or proposed action, two agents take turns making short statements up to a limit, then a human...
2025-01-04
39 min
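The protocol in the abstract is simple to state: two agents alternate short statements about a question up to a turn limit, and a judge then picks a winner, which supplies the zero-sum reward for self-play training. A minimal sketch of that game loop is below, with hypothetical agent_a, agent_b, and judge callables standing in for the trained agents and the human judge.

```python
# Minimal sketch of the debate game loop: two agents alternate statements,
# then a judge decides which agent argued for the better answer.
from typing import Callable, List

def run_debate(question: str,
               agent_a: Callable[[str, List[str]], str],
               agent_b: Callable[[str, List[str]], str],
               judge: Callable[[str, List[str]], int],
               max_turns: int = 6) -> int:
    transcript: List[str] = []
    for turn in range(max_turns):
        speaker, name = (agent_a, "A") if turn % 2 == 0 else (agent_b, "B")
        statement = speaker(question, transcript)
        transcript.append(f"Agent {name}: {statement}")
    # The judge sees only the question and the transcript and returns 0 if
    # agent A wins, 1 if agent B wins; in self-play training this verdict
    # is the zero-sum reward signal.
    return judge(question, transcript)
```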
AI Safety Fundamentals
Takeaways From Our Robust Injury Classifier Project [Redwood Research]
With the benefit of hindsight, we have a better sense of our takeaways from our first adversarial training project (paper). Our original aim was to use adversarial training to make a system that (as far as we could tell) never produced injurious completions. If we had accomplished that, we think it would have been the first demonstration of a deep learning system avoiding a difficult-to-formalize catastrophe with an ultra-high level of reliability. Presumably, we would have needed to invent novel robustness techniques that could have informed techniques useful for aligning TAI. With a successful system, we also could have...
2025-01-04
12 min
AI Safety Fundamentals
If-Then Commitments for AI Risk Reduction
This article from Holden Karnofsky, now a visiting scholar at the Carnegie Endowment for International Peace, discusses "If-Then" commitments as a structured approach to managing AI risks without hindering innovation. These commitments offer a framework in which specific responses are triggered when particular risks arise, allowing for a proactive and organized approach to AI safety. The article emphasizes that as AI technology rapidly advances, such predefined voluntary commitments or regulatory requirements can help guide timely interventions, ensuring that AI developments remain safe and beneficial while minimizing unnecessary delays. Original text: https://carnegieendowment.org/research/2024/09/if-then-commitments-for-ai-risk-reduction...
2025-01-02
40 min
AI Safety Fundamentals
This is How AI Will Transform How Science Gets Done
This article by Eric Schmidt, former CEO of Google, explains existing use cases for AI in the scientific community and outlines ways that sufficiently advanced, narrow AI models might transform scientific discovery in the near future. As you read, pay close attention to the existing case studies he describes. Original text: https://www.technologyreview.com/2023/07/05/1075865/eric-schmidt-ai-will-transform-science/ Author(s): Eric Schmidt. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2025-01-02
10 min
AI Safety Fundamentals
Open-Sourcing Highly Capable Foundation Models: An Evaluation of Risks, Benefits, and Alternative Methods for Pursuing Open-Source Objectives
This resource is the second of two on the benefits and risks of open-weights model release. In contrast, this paper expresses strong skepticism toward releasing highly capable foundation model weights, arguing that the risks may outweigh the benefits. While recognizing the advantages of openness, such as encouraging innovation and external oversight, it warns that making models publicly available increases the potential for misuse, including cyberattacks, biological weapon development, and disinformation. The article emphasizes that malicious actors could easily disable safeguards, fine-tune models for harmful purposes, and exploit vulnerabilities. Instead of fully open releases, it advocates for safer alternatives like...
2024-12-30
56 min
AI Safety Fundamentals
So You Want to be a Policy Entrepreneur?
This paper by academic Michael Mintrom defines policy entrepreneurs as "energetic actors who engage in collaborative efforts in and around government to promote policy innovations". He describes five methods they use: Problem framing, Using and expanding networks, Working with advocacy coalitions, Leading by example, and Scaling up change processes. Mintrom authored this piece focusing on the impacts of climate change, noting that it is an "enormous challenge now facing humanity", and that "no area of government activity will be immune from the disruptions to come". As you read, consider the ways in which AI governance parallels and...
2024-12-30
41 min
AI Safety Fundamentals
Considerations for Governing Open Foundation Models
This resource is the first of two on the benefits and risks of open-weights model release. This paper broadly supports the open release of foundation model weights, arguing that such openness can drive competition, enhance innovation, and promote transparency. It contends that open models can distribute power more evenly across society, reducing the risk of market monopolies and fostering a diverse ecosystem of AI development. Despite potential risks like disinformation or misuse by malicious actors, the article argues that current evidence about these risks remains limited. It suggests that regulatory interventions might disproportionately harm developers, particularly if policies fail...
2024-12-30
26 min
AI Safety Fundamentals
Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate
In the fall of 2023, the US Bipartisan Senate AI Working Group held insight forums with global leaders. Participants included the leaders of major AI labs, tech companies, major organizations adopting and implementing AI throughout the wider economy, union leaders, academics, advocacy groups, and civil society organizations. This document, released on March 15, 2024, is the culmination of those discussions. It provides a roadmap that US policy is likely to follow as the US Senate begins to create legislation. Original text: https://www.politico.com/f/?id=0000018f-79a9-d62d-ab9f-f9af975d0000 ...
2024-05-22
36 min
AI Safety Fundamentals
The AI Triad and What It Means for National Security Strategy
In this paper from CSET, Ben Buchanan outlines a framework for understanding the inputs that power machine learning. Called "the AI Triad", it focuses on three inputs: algorithms, data, and compute. Original text: https://cset.georgetown.edu/wp-content/uploads/CSET-AI-Triad-Report.pdf Author(s): Ben Buchanan. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2024-05-20
39 min
AI Safety Fundamentals
Societal Adaptation to Advanced AI
This paper explores the under-discussed strategies of adaptation and resilience to mitigate the risks of advanced AI systems. The authors present arguments supporting the need for societal AI adaptation, create a framework for adaptation, offer examples of adapting to AI risks, outline the concept of resilience, and provide concrete recommendations for policymakers. Original text: https://drive.google.com/file/d/1k3uqK0dR9hVyG20-eBkR75_eYP2efolS/view?usp=sharing Author(s): Jamie Bernardi, Gabriel Mukobi, Hilary Greaves, Lennart Heim, and Markus Anderljung. A podcast by BlueDot I...
2024-05-20
46 min
AI Safety Fundamentals
OECD AI Principles
This document from the OECD is split into two sections: principles for responsible stewardship of trustworthy AI & national policies and international co-operation for trustworthy AI. 43 governments around the world have agreed to adhere to the document. While originally written in 2019, updates were made in 2024 which are reflected in this version. Original text: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 Author(s): The Organization for Economic Cooperation and Development. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2024-05-13
23 min
AI Safety Fundamentals
The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023
This statement was released by the UK Government as part of their Global AI Safety Summit from November 2023. It notes that frontier models pose unique risks and calls for international cooperation, finding that "many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation." It was signed by multiple governments, including the US, EU, India, and China. Original text: https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023 Author(s): UK Government. A podcast by BlueDot Im...
2024-05-13
08 min
AI Safety Fundamentals
Key facts: UNESCO’s Recommendation on the Ethics of Artificial Intelligence
This summary of UNESCO's Recommendation on the Ethics of AI outlines four core values, ten core principles, and eleven actionable policies for responsible AI governance. The full text was agreed to by all 193 member states of the United Nations. Original text: https://unesdoc.unesco.org/ark:/48223/pf0000385082 Author(s): The United Nations Educational, Scientific, and Cultural Organization. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2024-05-13
20 min
AI Safety Fundamentals
A pro-innovation approach to AI regulation: government response
This report by the UK's Department for Science, Innovation and Technology outlines a regulatory framework for UK AI policy. Per the report, "AI is helping to make our jobs safer and more satisfying, conserve our wildlife and fight climate change, and make our public services more efficient. Not only do we need to plan for the capabilities and uses of the AI systems we have today, but we must also prepare for a near future where the most powerful systems are broadly accessible and significantly more capable." Original text: https://www.gov.uk/government/consultations...
2024-05-13
38 min
AI Safety Fundamentals
China’s AI Regulations and How They Get Made
This report from the Carnegie Endowment for International Peace summarizes Chinese AI policy as of mid-2023. It also provides analysis of the factors motivating Chinese AI governance. We're providing a more structured analysis of Chinese AI policy relative to other governments because we expect learners will be less familiar with the Chinese policy process. Original text: https://carnegieendowment.org/2023/07/10/china-s-ai-regulations-and-how-they-get-made-pub-90117 Author(s): Matt Sheehan. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2024-05-13
27 min
AI Safety Fundamentals
AI Index Report 2024, Chapter 7: Policy and Governance
This yearly report from Stanford's Institute for Human-Centered AI (HAI) tracks AI governance actions and broader trends in policies and legislation by governments around the world in 2023. It includes a summary of major policy actions taken by different governments, as well as analyses of regulatory trends, the volume of AI legislation, and different focus areas governments are prioritizing in their interventions. Original text: https://aiindex.stanford.edu/wp-content/uploads/2024/04/HAI_AI-Index-Report-2024_Chapter_7.pdf Authors: Nestor Maslej et al. A podcast by BlueDot Impact.
2024-05-13
20 min
AI Safety Fundamentals
Recent U.S. Efforts on AI Policy
This high-level overview by CISA summarizes major US policies on AI at the federal level. Important items worth further investigation include Executive Order 14110, the voluntary commitments, the AI Bill of Rights, and Executive Order 13859. Original text: https://www.cisa.gov/ai/recent-efforts Author(s): The US Cybersecurity and Infrastructure Security Agency. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2024-05-13
05 min
AI Safety Fundamentals
FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence
This fact sheet from The White House summarizes President Biden's AI Executive Order from October 2023. The President's AI EO represents the most aggressive approach to date from the US executive branch on AI policy. Original text: https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/ Author(s): The White House. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2024-05-13
14 min
AI Safety Fundamentals
High-level summary of the AI Act
This primer by the Future of Life Institute highlights core elements of the EU AI Act. It includes a high level summary alongside explanations of different restrictions on prohibited AI systems, high-risk AI systems, and general purpose AI. Original text: https://artificialintelligenceact.eu/high-level-summary/ Author(s): The Future of Life Institute. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2024-05-13
18 min
AI Safety Fundamentals
The Policy Playbook: Building a Systems-Oriented Approach to Technology and National Security Policy
This report by the Center for Security and Emerging Technology first analyzes the tensions and tradeoffs between three strategic technology and national security goals: driving technological innovation, impeding adversaries' progress, and promoting safe deployment. It then identifies different direct and enabling policy levers, assessing each based on the tradeoffs they make. While this document is designed for US policymakers, most of its findings are broadly applicable. Original text: https://cset.georgetown.edu/wp-content/uploads/The-Policy-Playbook.pdf Authors: Jack Corrigan, Melissa Flagg, and Dewi Murdick. A po...
2024-05-05
56 min
AI Safety Fundamentals
Strengthening Resilience to AI Risk: A Guide for UK Policymakers
This report from the Centre for Emerging Technology and Security and the Centre for Long-Term Resilience identifies different levers as they apply to different stages of the AI life cycle. They split the AI lifecycle into three stages: design, training, and testing; deployment and usage; and longer-term deployment and diffusion. It also introduces a risk mitigation hierarchy to rank different approaches in decreasing preference, arguing that “policy interventions will be most effective if they intervene at the point in the lifecycle where risk first arises.” While this document is designed for UK policymakers, most of its findings are b...
2024-05-04
24 min
AI Safety Fundamentals
The Convergence of Artificial Intelligence and the Life Sciences: Safeguarding Technology, Rethinking Governance, and Preventing Catastrophe
This report by the Nuclear Threat Initiative primarily focuses on how AI's integration into biosciences could advance biotechnology but also poses potentially catastrophic biosecurity risks. It’s included as a core resource this week because the assigned pages offer a valuable case study into an under-discussed lever for AI risk mitigation: building resilience. Resilience in a risk reduction context is defined by the UN as “the ability of a system, community or society exposed to hazards to resist, absorb, accommodate, adapt to, transform and recover from the effects of a hazard in a timely and efficient manner, inclu...
2024-05-04
08 min
AI Safety Fundamentals
Rogue AIs
This excerpt from CAIS's AI Safety, Ethics, and Society textbook provides a deep dive into the CAIS resource from session three, focusing specifically on the challenges of controlling advanced AI systems. Original text: https://www.aisafetybook.com/textbook/1-5 Author: The Center for AI Safety. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2024-05-01
34 min
AI Safety Fundamentals
What is AI Alignment?
To address the problem of rogue AIs, we'll have to align them. In this article by Adam Jones of BlueDot Impact, Jones introduces the concept of aligning AIs. He defines alignment as "making AI systems try to do what their creators intend them to do." Original text: https://aisafetyfundamentals.com/blog/what-is-ai-alignment/ Author: Adam Jones. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2024-05-01
11 min
AI Safety Fundamentals
An Overview of Catastrophic AI Risks
This article from the Center for AI Safety provides an overview of ways that advanced AI could cause catastrophe. It groups catastrophic risks into four categories: malicious use, AI race, organizational risk, and rogue AIs. The article is a summary of a larger paper that you can read by clicking here. Original text: https://www.safe.ai/ai-risk Authors: Dan Hendrycks, Thomas Woodside, Mantas Mazeika. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2024-04-29
45 min
AI Safety Fundamentals
Future Risks of Frontier AI
This report from the UK's Government Office for Science envisions five potential risk scenarios from frontier AI. Each scenario includes information on the AI system's capabilities, ownership and access, safety, level and distribution of use, and geopolitical context. It provides key policy issues for each scenario and concludes with an overview of existential risk. If you have extra time, we'd recommend you read the entire document. Original text: https://assets.publishing.service.gov.uk/media/653bc393d10f3500139a6ac5/future-risks-of-frontier-ai-annex-a.pdf Author: The UK Government Office...
2024-04-23
40 min
AI Safety Fundamentals
What risks does AI pose?
This resource, written by Adam Jones at BlueDot Impact, provides a comprehensive overview of the existing and anticipated risks of AI. As you're going through the reading, consider what different futures might look like should different combinations of risks materialize. Original text: https://aisafetyfundamentals.com/blog/ai-risks/ Author: Adam Jones. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2024-04-23
24 min
AI Safety Fundamentals
AI Could Defeat All Of Us Combined
This blog post from Holden Karnofsky, Open Philanthropy's Director of AI Strategy, explains how advanced AI might overpower humanity. It summarizes superintelligent takeover arguments and provides a scenario where human-level AI disempowers humans without achieving superintelligence. As Holden summarizes: "if there's something with human-like skills, seeking to disempower humanity, with a population in the same ballpark as (or larger than) that of all humans, we've got a civilization-level problem." Original text: https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/#the-standard-argument-superintelligence-and-advanced-technology Authors: Holden Karnofsky. A podcast by BlueDot Impact. ...
2024-04-22
23 min
AI Safety Fundamentals
Positive AI Economic Futures
This insight report from the World Economic Forum summarizes some positive AI outcomes. Some proposed futures include AI enabling shared economic benefit, creating more fulfilling jobs, or allowing humans to work less – giving them time to pursue more satisfying activities like volunteering, exploration, or self-improvement. It also discusses common problems that prevent people from making good predictions about the future.Note: this report was released before ChatGPT, which seems to have shifted expert predictions about when AI systems might be broadly capable at completing most cognitive labor (see Section 3 exhibit 6 of the McKinsey resource below). Keep this in...
2024-04-16
21 min
AI Safety Fundamentals
The Economic Potential of Generative AI: The Next Productivity Frontier
This report from McKinsey discusses the huge potential for economic growth that generative AI could bring, examining key drivers and exploring potential productivity boosts in different business functions. While reading, evaluate how realistic its claims are, and how this might affect the organization you work at (or organizations you might work at in the future). Original text: https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier Authors: Michael Chui et al. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2024-04-16
42 min
AI Safety Fundamentals
Moore's Law for Everything
This blog by Sam Altman, the CEO of OpenAI, provides insight into what AI company leaders are saying and thinking about their reasons for pursuing advanced AI. It lays out how Altman thinks the world will change because of AI and what policy changes he believes we will need to make. As you're reading, consider Altman's position and how it might affect the way he discusses this technology or his policy recommendations. Original text: https://moores.samaltman.com Author: Sam Altman. A podcast by B...
2024-04-16
17 min
AI Safety Fundamentals
The Transformative Potential of Artificial Intelligence
This paper by Ross Gruetzemacher and Jess Whittlestone examines the concept of transformative AI, which significantly impacts society without necessarily achieving human-level cognitive abilities. The authors propose three categories of transformation: Narrowly Transformative AI, affecting specific domains like the military; Transformative AI, causing broad changes akin to general-purpose technologies such as electricity; and Radically Transformative AI, inducing profound societal shifts comparable to the Industrial Revolution. Note: this resource uses “GPT” to refer to general purpose technologies, which they define as “a technology that initially has much scope for improvement and eventually comes to be widely used.” Keep in mind...
2024-04-16
49 min
AI Safety Fundamentals
The AI Triad and What It Means for National Security Strategy
A single sentence can summarize the complexities of modern artificial intelligence: Machine learning systems use computing power to execute algorithms that learn from data. Everything that national security policymakers truly need to know about a technology that seems simultaneously trendy, powerful, and mysterious is captured in those 13 words. They specify a paradigm for modern AI—machine learning—in which machines draw their own insights from data, unlike the human-driven expert systems of the past. The same sentence also introduces the AI triad of algorithms, data, and computing power. Each element is vital to the power...
2023-05-13
27 min
AI Safety Fundamentals
A Short Introduction to Machine Learning
Despite the current popularity of machine learning, I haven’t found any short introductions to it which quite match the way I prefer to introduce people to the field. So here’s my own. Compared with other introductions, I’ve focused less on explaining each concept in detail, and more on explaining how they relate to other important concepts in AI, especially in diagram form. If you're new to machine learning, you shouldn't expect to fully understand most of the concepts explained here just after reading this post - the goal is instead to provide a broad framework which will c...
2023-05-13
17 min
AI Safety Fundamentals
Visualizing the Deep Learning Revolution
The field of AI has undergone a revolution over the last decade, driven by the success of deep learning techniques. This post aims to convey three ideas using a series of illustrative examples: There have been huge jumps in the capabilities of AIs over the last decade, to the point where it's becoming hard to specify tasks that AIs can't do. This progress has been primarily driven by scaling up a handful of relatively simple algorithms (rather than by developing a more principled or scientific understanding of deep learning). Very few people predicted that progress would be a...
2023-05-13
41 min
AI Safety Fundamentals
Overview of How AI Might Exacerbate Long-Running Catastrophic Risks
Developments in AI could exacerbate long-running catastrophic risks, including bioterrorism, disinformation and resulting institutional dysfunction, misuse of concentrated power, nuclear and conventional war, other coordination failures, and unknown risks. This document compiles research on how AI might raise these risks. (Other material in this course discusses more novel risks from AI.) We draw heavily from previous overviews by academics, particularly Dafoe (2020) and Hendrycks et al. (2023). Source: https://aisafetyfundamentals.com/governance-blog/overview-of-ai-risk-exacerbation Narrated for AI Safety...
2023-05-13
24 min
AI Safety Fundamentals
The Need for Work on Technical AI Alignment
This page gives an overview of the alignment problem. It describes our motivation for running courses about technical AI alignment. The terminology should be relatively broadly accessible (not assuming any previous knowledge of AI alignment or much knowledge of AI/computer science).This piece describes the basic case for AI alignment research, which is research that aims to ensure that advanced AI systems can be controlled or guided towards the intended goals of their designers. Without such work, advanced AI systems could potentially act in ways that are severely at odds with their designers’ in...
2023-05-13
34 min
AI Safety Fundamentals
As AI Agents Like Auto-GPT Speed up Generative AI Race, We All Need to Buckle Up
If you thought the pace of AI development had sped up since the release of ChatGPT last November, well, buckle up. Thanks to the rise of autonomous AI agents like Auto-GPT, BabyAGI and AgentGPT over the past few weeks, the race to get ahead in AI is just getting faster. And, many experts say, more concerning. Source: https://venturebeat.com/ai/as-ai-agents-like-auto-gpt-speed-up-generative-ai-race-we-all-need-to-buckle-up-the-ai-beat/ Narrated for AI Safety Fundamentals by TYPE III AUDIO. --- A podcast by BlueDot Impact. Learn more...
2023-05-13
07 min
AI Safety Fundamentals
Specification Gaming: The Flip Side of AI Ingenuity
Specification gaming is a behaviour that satisfies the literal specification of an objective without achieving the intended outcome. We have all had experiences with specification gaming, even if not by this name. Readers may have heard the myth of King Midas and the golden touch, in which the king asks that anything he touches be turned to gold - but soon finds that even food and drink turn to metal in his hands. In the real world, when rewarded for doing well on a homework assignment, a student might copy another student to get the right answers, rather than...
2023-05-13
13 min
AI Safety Fundamentals
Why Might Misaligned, Advanced AI Cause Catastrophe?
You may have seen arguments (such as these) for why people might create and deploy advanced AI that is both power-seeking and misaligned with human interests. This may leave you thinking, "OK, but would such AI systems really pose catastrophic threats?" This document compiles arguments for the claim that misaligned, power-seeking, advanced AI would pose catastrophic risks. We'll see arguments for the following claims, which are mostly separate/independent reasons for concern: humanity's past holds concerning analogies; AI systems have some major inherent advantages over humans; AIs could come to...
2023-05-13
20 min
AI Safety Fundamentals
AI Safety Seems Hard to Measure
In previous pieces, I argued that there's a real and large risk of AI systems' developing dangerous goals of their own and defeating all of humanity - at least in the absence of specific efforts to prevent this from happening. A young, growing field of AI safety research tries to reduce this risk, by finding ways to ensure that AI systems behave as intended (rather than forming ambitious aims of their own and deceiving and manipulating humans as needed to accomplish them). Maybe we'll succeed in reducing the risk, and maybe...
2023-05-13
22 min
AI Safety Fundamentals
Nobody’s on the Ball on AGI Alignment
Observing from afar, it's easy to think there's an abundance of people working on AGI safety. Everyone on your timeline is fretting about AI risk, and it seems like there is a well-funded EA-industrial-complex that has elevated this to their main issue. Maybe you've even developed a slight distaste for it all—it reminds you a bit too much of the woke and FDA bureaucrats, and Eliezer seems pretty crazy to you. That's what I used to think too, a couple of years ago. Then I got to see things more up close. And here's...
2023-05-13
17 min
AI Safety Fundamentals
Emergent Deception and Emergent Optimization
I've previously argued that machine learning systems often exhibit emergent capabilities, and that these capabilities could lead to unintended negative consequences. But how can we reason concretely about these consequences? There are two principles I find useful for reasoning about future emergent capabilities: If a capability would help get lower training loss, it will likely emerge in the future, even if we don't observe much of it now. As ML models get larger and are trained on more and better data, simpler heuristics will tend to get replaced by more complex heuristics. Using...
2023-05-13
33 min
AI Safety Fundamentals
Avoiding Extreme Global Vulnerability as a Core AI Governance Problem
Much has been written framing and articulating the AI governance problem from a catastrophic risks lens, but these writings have been scattered. This page aims to provide a synthesized introduction to some of these already prominent framings. This is just one attempt at suggesting an overall frame for thinking about some AI governance problems; it may miss important things. Some researchers think that unsafe development or misuse of AI could cause massive harms. A key contributor to some of these risks is that catastrophe may not require all or most relevant decision makers to make harmful decisions. Instead, harmful...
2023-05-13
11 min
AI Safety Fundamentals
Frontier AI Regulation: Managing Emerging Risks to Public Safety
Advanced AI models hold the promise of tremendous benefits for humanity, but society needs to proactively manage the accompanying risks. In this paper, we focus on what we term “frontier AI” models — highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety. Frontier AI models pose a distinct regulatory challenge: dangerous capabilities can arise unexpectedly; it is difficult to robustly prevent a deployed model from being misused; and, it is difficult to stop a model’s capabilities from proliferating broadly. To address these challenges, at least three building blocks for the regulation of front...
2023-05-13
29 min
AI Safety Fundamentals
Model Evaluation for Extreme Risks
Current approaches to building general-purpose AI systems tend to produce systems with both beneficial and harmful capabilities. Further progress in AI development could lead to capabilities that pose extreme risks, such as offensive cyber capabilities or strong manipulation skills. We explain why model evaluation is critical for addressing extreme risks. Developers must be able to identify dangerous capabilities (through “dangerous capability evaluations”) and the propensity of models to apply their capabilities for harm (through “alignment evaluations”). These evaluations will become critical for keeping policymakers and other stakeholders informed, and for making responsible decisions about model training, deployment, and security.
2023-05-13
56 min
AI Safety Fundamentals
Primer on Safety Standards and Regulations for Industrial-Scale AI Development
This primer introduces various aspects of safety standards and regulations for industrial-scale AI development: what they are, their potential and limitations, some proposals for their substance, and recent policy developments. Key points are: Standards are formal specifications of best practices, which can influence regulations. Regulations are requirements established by governments. Cutting-edge AI development is being done with individual companies spending over $100 million. This industrial scale may enable narrowly targeted and enforceable regulation to reduce the risks of cutting-edge AI development. Regulation of industrial-scale AI development faces various...
2023-05-13
15 min
AI Safety Fundamentals
Racing Through a Minefield: The AI Deployment Problem
Push AI forward too fast, and catastrophe could occur. Too slow, and someone else less cautious could do it. Is there a safe course? Source: https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/ Crossposted from the Cold Takes Audio podcast. --- A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2023-05-13
21 min
AI Safety Fundamentals
Choking off China’s Access to the Future of AI
Introduction: On October 7, 2022, the Biden administration announced a new export controls policy on artificial intelligence (AI) and semiconductor technologies to China. These new controls—a genuine landmark in U.S.-China relations—provide the complete picture after a partial disclosure in early September generated confusion. For weeks the Biden administration has been receiving criticism in many quarters for a new round of semiconductor export control restrictions, first disclosed on September 1. The restrictions block leading U.S. AI computer chip designers, such as Nvidia and AMD, from selling their high-end chips for AI and supercomputing to China. The crit...
2023-05-13
07 min
AI Safety Fundamentals
Primer on AI Chips and AI Governance
If governments could regulate the large-scale use of "AI chips," that would likely enable governments to govern frontier AI development—to decide who does it and under what rules. In this article, we will use the term "AI chips" to refer to cutting-edge, AI-specialized computer chips (such as NVIDIA's A100 and H100 or Google's TPUv4). Frontier AI models like GPT-4 are already trained using tens of thousands of AI chips, and trends suggest that more advanced AI will require even more computing power. Source: https://ai...
2023-05-13
25 min
AI Safety Fundamentals
The State of AI in Different Countries — An Overview
Some are concerned that regulating AI progress in one country will slow that country down, putting it at a disadvantage in a global AI arms race. Many proponents of AI regulation disagree; they have pushed back on the overall framework, pointed out serious drawbacks and limitations of racing, and argued that regulations do not have to slow progress down. Another disagreement is about whether countries are in fact in a neck and neck arms race; some believe that the United States and its allies have a significant lead which would allow for regulation even if t...
2023-05-13
36 min
AI Safety Fundamentals
What Does It Take to Catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring
As advanced machine learning systems’ capabilities begin to play a significant role in geopolitics and societal order, it may become imperative that (1) governments be able to enforce rules on the development of advanced ML systems within their borders, and (2) countries be able to verify each other’s compliance with potential future international agreements on advanced ML development. This work analyzes one mechanism to achieve this, by monitoring the computing hardware used for large-scale NN training. The framework’s primary goal is to provide governments high confidence that no actor uses large quantities of specialized ML chips to execute a traini...
2023-05-13
32 min
AI Safety Fundamentals
A Tour of Emerging Cryptographic Technologies
Historically, progress in the field of cryptography has been enormously consequential. Over the past century, for instance, cryptographic discoveries have played a key role in a world war and made it possible to use the internet for business and private communication. In the interest of exploring the impact the field may have in the future, I consider a suite of more recent developments. My primary focus is on blockchain-based technologies (such as cryptocurrencies) and on techniques for computing on confidential data (such as secure multiparty computation). I provide an introduction to these technologies that assumes no mathematical background or...
2023-05-13
30 min
AI Safety Fundamentals
Historical Case Studies of Technology Governance and International Agreements
The following excerpts summarize historical case studies that are arguably informative for AI governance. The case studies span nuclear arms control, militaries' adoption of electricity, and environmental agreements. (For ease of reading, we have edited the formatting of the following excerpts and added bolding.) Source: https://aisafetyfundamentals.com/governance-blog/historical-case-studies Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO. --- A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2023-05-13
35 min
AI Safety Fundamentals
12 Tentative Ideas for US AI Policy
About two years ago, I wrote that "it's difficult to know which 'intermediate goals' [e.g. policy goals] we could pursue that, if achieved, would clearly increase the odds of eventual good outcomes from transformative AI." Much has changed since then, and in this post I give an update on 12 ideas for US policy goals.[1] Many […] The original text contained 7 footnotes which were omitted from this narration. --- Source: https://www.openphilanthropy.org/research/12-tentative-ideas-for-us-ai-policy --- Narrated by TYPE III AUDIO. A podcast by BlueDot Impact. Learn...
2023-05-13
09 min
AI Safety Fundamentals
Let’s Think About Slowing Down AI
If you fear that someone will build a machine that will seize control of the world and annihilate humanity, then one kind of response is to try to build further machines that will seize control of the world even earlier without destroying it, forestalling the ruinous machine’s conquest. An alternative or complementary kind of response is to try to avert such machines being built at all, at least while the degree of their apocalyptic tendencies is ambiguous. The latter approach seems to me like the kind of basic and obvious thing worthy of at least consideration, and...
2023-05-13
1h 14
AI Safety Fundamentals
What AI Companies Can Do Today to Help With the Most Important Century
I've been writing about tangible things we can do today to help the most important century go well. Previously, I wrote about helpful messages to spread and how to help via full-time work. This piece is about what major AI companies can do (and not do) to be helpful. By "major AI companies," I mean the sorts of AI companies that are advancing the state of the art, and/or could play a major role in how very powerful AI systems end up getting used.[1] This piece could be useful to people who work at tho...
2023-05-13
18 min
AI Safety Fundamentals
OpenAI Charter
Our Charter describes the principles we use to execute on OpenAI's mission. --- Source: https://openai.com/charter --- Narrated by TYPE III AUDIO. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2023-05-13
02 min
AI Safety Fundamentals
LP Announcement by OpenAI
We've created OpenAI LP, a new "capped-profit" company that allows us to rapidly increase our investments in compute and talent while including checks and balances to actualize our mission. The original text contained 1 footnote which was omitted from this narration. --- Source: https://openai.com/blog/openai-lp --- Narrated by TYPE III AUDIO. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2023-05-13
06 min
AI Safety Fundamentals
International Institutions for Advanced AI
International institutions may have an important role to play in ensuring advanced AI systems benefit humanity. International collaborations can unlock AI’s ability to further sustainable development, and coordination of regulatory efforts can reduce obstacles to innovation and the spread of benefits. Conversely, the potential dangerous capabilities of powerful and general-purpose AI systems create global externalities in their development and deployment, and international efforts to further responsible AI practices could help manage the risks they pose. This paper identifies a set of governance functions that could be performed at an international level to address these challenges, ranging from supporting ac...
2023-05-13
42 min
AI Safety Fundamentals
China-Related AI Safety and Governance Paths
Expertise in China and its relations with the world might be critical in tackling some of the world's most pressing problems. In particular, China's relationship with the US is arguably the most important bilateral relationship in the world, with these two countries collectively accounting for over 40% of global GDP.1 These considerations led us to publish a guide to improving China–Western coordination on global catastrophic risks and other key problems in 2018. Since then, we have seen an increase in the number of people exploring this area. China is one of the most important countries developing and sh...
2023-05-13
47 min
AI Safety Fundamentals
AI Governance Needs Technical Work
People who want to improve the trajectory of AI sometimes think their options for object-level work are (i) technical safety work and (ii) non-technical governance work. But that list misses things; another group of arguably promising options is technical work in AI governance, i.e. technical work that mainly boosts AI governance interventions. This post provides a brief overview of some ways to do this work—what they are, why they might be valuable, and what you can do if you're interested. I discuss: Engineering technical levers to make AI coor...
2023-05-13
14 min
AI Safety Fundamentals
Career Resources on AI Strategy Research
(Last updated August 31, 2022) Summary and Introduction: One potential way to improve the impacts of AI is helping various actors figure out good AI strategies—that is, good high-level plans focused on AI. To support people who are interested in that, we compile some relevant career i --- Source: https://aisafetyfundamentals.com/governance-blog/ai-strategy-careers --- Narrated by TYPE III AUDIO. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2023-05-13
18 min
AI Safety Fundamentals
Some Talent Needs in AI Governance
I carried out a short project to better understand talent needs in AI governance. This post reports on my findings. How this post could be helpful: If you're trying to upskill in AI governance, this post could help you to understand the kinds of work and skills that are in demand. If you're a field-builder trying to find or upskill people to work in AI governance, this post could help you to understand what talent search/development efforts are especially valuable. Source: https://aisafetyfundamentals.com/governance-blog/some...
2023-05-13
15 min
AI Safety Fundamentals
List of EA Funding Opportunities
This is a quickly written post listing opportunities for people to apply for funding from funders that are part of the EA community. … --- First published: October 26th, 2021 Source: https://forum.effectivealtruism.org/posts/DqwxrdyQxcMQ8P2rD/list-of-ea-funding-opportunities --- Narrated by TYPE III AUDIO. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2023-05-13
12 min
AI Safety Fundamentals
My Current Impressions on Career Choice for Longtermists
This post summarizes the way I currently think about career choice for longtermists. I have put much less time into thinking about this than 80,000 Hours, but I think it's valuable for there to be multiple perspectives on this topic out there. Edited to add: see below for why I chose to focus on longtermism in this post. While the jobs I list overlap heavily with the jobs 80,000 Hours lists, I organize them and conceptualize them differently. 80,000 Hours tends to emphasize "paths" to particular roles working on particular causes; by con...
2023-05-13
47 min
AI Safety Fundamentals
AI Governance Needs Technical Work
People who want to improve the trajectory of AI sometimes think their options for object-level work are (i) technical safety work and (ii) non-technical governance work. But that list misses things; another group of arguably promising options is technical work in AI governance, i.e. technical work that mainly boosts AI governance interventions. This post provides a brief overview of some ways to do this work—what they are, why they might be valuable, and what you can do if you’re interested. I discuss: Engineering technical levers to make AI coordination/regulation enforceable (through hardware engineering, software/ML engi...
2023-05-13
15 min
The bluedot Podcast
bluedot festival 2022 - Helen Pankhurst In Conversation with Laura Bates
Welcome to the bluedot podcast with Chris Hawkins. bluedot is finally back! And after an extraordinary return to Jodrell Bank this summer, we're excited to be able to share some of the many highlights of this year's bluedot 2022. Over the coming months, you can enjoy full talks, panels and listening parties from bluedot – including headline speakers from our Mission Control arena, and intimate chats in our Notes culture tent. In this episode, you'll be hearing Helen Pankhurst in conversation with Laura Bates, the author and creator of Everyday Sexism. This ta...
2022-09-30
47 min
The bluedot Podcast
bluedot festival 2022 - A Certain Ratio Listening Party with Chris Hawkins
Welcome to the bluedot podcast, with Chris Hawkins. bluedot is finally back! And after an extraordinary return to Jodrell Bank this summer, we're excited to be able to share some of the many highlights of this year's bluedot 2022. Over the coming months, you can enjoy full talks, panels and listening parties from bluedot – including headline speakers from our Mission Control arena, and intimate chats in our Notes culture tent. We took the bluedot podcast onstage at bluedot 2022, and this In Conversation recorded live features Chris Hawkins in conversation with A Ce...
2022-09-16
50 min
The bluedot Podcast
bluedot festival 2022 - Kelly Lee Owens Listening Party with Tim Burgess
Welcome to the bluedot podcast. bluedot is finally back! And after an extraordinary return to Jodrell Bank this summer, we're excited to be able to share some of the many highlights of this year's bluedot 2022. Over the coming months, you can enjoy full talks, panels and listening parties from bluedot – including headline speakers from our Mission Control arena, and intimate chats in our Notes culture tent. This episode is a full recording of a special Tim's Listening Party, recorded on the Friday of bluedot 2022, with Kelly Lee Owens in conversation wi...
2022-09-02
1h 04
The bluedot Podcast
In Conversation with Kate Vokes, Gavin Sharp, Inga Hurst & Boshra Ghgam
Welcome to the bluedot podcast. This is the third instalment of our In Conversation miniseries of talks and panels in Manchester, powered by our friends at bruntwood. In this live discussion, we pose the question 'how does culture build community?'. Hosted by bruntwood's and The Oglesby Charitable Trust's Kate Vokes, and featuring Band On The Wall's Gavin Sharp, Inga Hirst from the Royal Exchange Theatre and actor and spoken word artist Boshra Ghgam, this panel discusses Manchester's cultural milestones, and wider implications of what culture can do for a city, and vice-versa. E...
2022-08-19
1h 00
The bluedot Podcast
In Conversation with Professor Teresa Anderson Live at Jodrell Bank
Teresa Anderson is an award-winning physicist and director of the Jodrell Bank Centre for Engagement, which she founded in 2010. Alongside Tim O’Brien, Teresa spearheaded the campaign to make Jodrell Bank a UNESCO World Heritage Site, an accolade it received in 2019. Teresa co-founded Live From Jodrell Bank in 2012, and the series of shows featured Elbow, Sigur Ros, The Halle and more, expanding in 2016 into the weekend of science and music you now know as bluedot… Welcome to the bluedot podcast… with Professor Teresa Anderson!
2022-07-08
31 min
The bluedot Podcast
In Conversation with The Radiophonic Workshop & Stealing Sheep
It’s a unique collaboration of electronic legends and indie favourites – the past and present combining to create something futuristic and extraordinary. La Planete Sauvage – the soundtrack to an iconic 1973 film – is a project that sees The Radiophonic Workshop and Stealing Sheep join forces, for an album released to mark 2021’s Delia Derbyshire Day. And this July it comes to bluedot for a very special performance on the Sunday of this year’s festival. This is the bluedot podcast with The Radiophonic Workshop and Stealing Sheep.
2022-07-01
29 min
The bluedot Podcast
In Conversation with Kelly Lee Owens
She’s the producer, songwriter and DJ whose avant-garde techno pop has seen her release three extraordinary albums to date. The most recent – LP.8 – was released earlier this year. Her combination of ethereal, atmospheric and at times industrial sounds has seen her win fans in Bjork, St Vincent and John Cale, all of whom she has gone on to collaborate with. Having joined us at bluedot in 2019, she returns as part of our Friday line-up this July alongside Spiritualized, Kojey Radical, Groove Armada and more. Welcome to the bluedot podcast… with Kelly Lee Owens.
2022-06-24
37 min
The bluedot Podcast
In Conversation with Tom Heap
As part of bluedot’s partnership with our friends at Bruntwood, we’re curating a series of special In Conversation talks at Bruntwood venues across the country, hosted by me – Chris Hawkins. The first of these recently took place featuring Tom Heap, the author behind 39 Ways To Save The Planet and a regular fixture on Countryfile. We spoke with Tom in front of a live audience at Bruntwood’s Bright Building at Manchester Science Park, and you can now enjoy the live recording of that talk in full here on the bluedot podcast. For more information about bluedot In Conversation, powered b...
2022-06-03
40 min
The bluedot Podcast
In Conversation with Porridge Radio
They’re the Brighton-founded project of songwriter Dana Margolin, whose prolific creative output has seen her go from a solo, self-releasing songwriter to the frontwoman of a Mercury Prize-nominated band. That Mercury nomination, received in 2020 for their album Every Bad, is now followed by the new album Waterslide, Diving Board, Ladder to the Sky. Welcome to the bluedot podcast with Porridge Radio.
2022-05-20
28 min
The bluedot Podcast
In Conversation with Jane Weaver
She’s the Manchester-based producer and songwriter whose extensive career has seen her carve out a unique sound that takes in psychedelia, folk and space rock, making her the quintessential bluedot artist. After the incredible success of her 2021 album Flock, she returns to bluedot this July after her first appearance back in 2017. Welcome... to the bluedot podcast – with Jane Weaver.
2022-05-06
28 min
The bluedot Podcast
In Conversation with Jim Al-Khalili
Today’s world is unpredictable and full of contradictions, and navigating its complexities while trying to make the best decisions is far from easy. In his new book The Joy of Science, the acclaimed physicist and bluedot favourite Professor Jim Al-Khalili presents 8 short lessons on how to unlock the clarity, empowerment, and joy of thinking and living a little more scientifically. In this brief guide to leading a more rational life, Professor Al-Khalili invites readers to engage with the world as scientists have been trained to do. The scientific method has served humankind well in its qu...
2022-04-22
32 min
The bluedot Podcast
In Conversation with Groove Armada
One of the UK’s best-loved dance acts, the three-time GRAMMY-nominated Groove Armada have been a mainstay of club and chill-out culture for over twenty years. Since the release of their debut Northern Star and the iconic Vertigo, which tipped them into household-name territory, they’ve been synonymous with a sound that’s traversed house, pop, disco and hip-hop, equal parts up and down tempo. And over the course of their nine studio albums, they’ve worked with an extraordinary array of collaborators and guest vocalists including Richie Havens, Angie Stone, Candi Staton, Neneh Cherry, PNAU’s Nick Littl...
2022-04-15
28 min
The bluedot Podcast
In Conversation with Lanterns on the Lake
Lanterns on the Lake are a Mercury Prize-nominated, critically acclaimed and adored Newcastle band, whose work has seen them collaborate with the Royal Northern Sinfonia and tour with the likes of Explosions in the Sky, Yann Tiersen and Low. We’re so excited to welcome Lanterns on the Lake to join Saturday’s Lovell Stage line-up at bluedot 2022 this July. And Hazel from the band joins us now for a special In Conversation.
2022-04-01
25 min
The bluedot Podcast
In Conversation with Mogwai
They’re an award-winning, legendary Scottish band whose twenty-five-year career has produced ten incredible albums, unique collaborations with the likes of Clint Mansell and Nine Inch Nails, and a prolific history of soundtracking for film and TV, including the 2006 documentary Zidane: A 21st Century Portrait and Mark Cousins’ 2015 piece Atomic. Following the Mercury Prize-nominated As The Love Continues, their tenth album and their first to hit number one in the Album Charts, we’re thrilled to welcome Mogwai to headline bluedot 2022 this July. And Stuart Braithwaite joins us for a special In Conversation.
2022-03-18
24 min
The bluedot Podcast
Jill Tarter and Ana Matronic
Chair of SETI, one of Discover magazine’s most important women in science, and the inspiration behind Jodie Foster’s Ellie in Contact, Jill Tarter is an icon of astronomy. bluedot favourite Ana Matronic met Jill via Skype to explore the cosmos and beyond. Part of bluedot's A Weekend In Outer Space, July 2020.
2021-01-15
53 min
The bluedot Podcast
Diversity and Representation in S.T.E.M.
Angela Saini and Tana Joseph join Jim Al-Khalili for an exploration of diversity and representation in the world of bluedot, how STEM can learn from other disciplines, and what the future of our institutions looks like. Part of bluedot's A Weekend In Outer Space, July 2020.
2021-01-15
55 min
The bluedot Podcast
The Hitchhiker's Guide to the Galaxy Reunion
Hitchhiker's Guide archivist Kevin Davies welcomes an all-star panel of Hitchhiker's legends to commemorate Douglas Adams' iconic series, featuring John Lloyd, James Thrift, Sandra Dickinson, Philip Pope and Toby Longworth. Part of bluedot's A Weekend In Outer Space, July 2020.
2021-01-15
49 min
The bluedot Podcast
Ann Druyan with Brian Cox and Robin Ince
We join forces with our friends at The Cosmic Shambles Network for a very special 'In Conversation' with the legendary Ann Druyan, hosted by Professor Brian Cox and Robin Ince. Part of bluedot's A Weekend In Outer Space, July 2020.
2021-01-15
1h 01
The bluedot Podcast
Welcome
Welcome to The bluedot Podcast.
2021-01-15
00 min