Today’s guest on my podcast is Mikhail Samin: AI safety researcher, effective altruist, creator of AudD, and former head of the Russian Pastafarian Church.

In this episode we discuss the AI alignment problem (unfriendly AI), ChatGPT, and AI safety.

Links to explore:

https://www.agisafetyfundamentals.com/ 

https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg 

https://aisafety.info/ 

My Twitter: https://twitter.com/mustreader

My TikTok: https://www.tiktok.com/@gregmustreader

00:00 — Intro

01:07 — The AI alignment problem

03:25 — Why Mikhail donated that much

06:45 — What can go wrong with AI

09:35 — What AGI and ASI are

13:25 — AI development

26:01 — How to stop an AI apocalypse

36:10 — Will we create safeguards against AI?

40:54 — Anything optimistic?

Collabs: mazdrid@gmail.com

#AI #AIalignment #chatgpt