Description

Is this a wake-up call for anyone who believes the dangers of AI are exaggerated?

Today on Digital Disruption, we’re joined by Roman Yampolskiy, a prominent writer and thinker on AI safety and an associate professor at the University of Louisville. He has recently been featured on podcasts such as Joe Rogan’s PowerfulJRE.

Roman is a leading voice in the field of Artificial Intelligence Safety and Security. He is the author of several influential books, including AI: Unexplainable, Unpredictable, Uncontrollable. His research focuses on the critical risks and challenges posed by advanced AI systems. A tenured professor in the Department of Computer Science and Engineering at the University of Louisville, he also serves as the founding director of the Cyber Security Lab.

Roman sits down with Geoff to discuss one of the most pressing issues of our time: the existential risks posed by AI and superintelligence. He shares his prediction that AI could lead to the extinction of humanity within the next century. They dive into the complexities of this issue, exploring the dangers that could arise both from AI’s malevolent use and from its autonomous actions. Roman draws a distinction between AI as a tool and AI as a sentient agent, explaining how a superintelligent system could outsmart human efforts to control it, with catastrophic consequences. The conversation challenges the optimism common in the tech world and advocates a more cautious, thoughtful approach to AI development.

In this episode:

00:00 Intro

00:45 Dr. Yampolskiy's prediction: AI extinction risk

02:15 Analyzing the odds of survival

04:00 Malevolent use of AI and superintelligence

06:00 Accidental vs. deliberate AI destruction

08:10 The dangers of uncontrolled AI

10:00 The role of optimism in AI development

12:00 The need for self-interest to slow down AI development

15:00 Narrow AI vs. Superintelligence

18:30 Economic and job displacement due to AI

22:00 Global competition and AI arms race

25:00 AI’s role in war and suffering

30:00 Can we control AI through ethical governance?

35:00 The singularity and human extinction

40:00 Superintelligence: How close are we?

45:00 Consciousness in AI

50:00 The difficulty of programming suffering in AI

55:00 Dr. Yampolskiy’s approach to AI safety

58:00 Thoughts on AI risk

Connect with Roman:

Website: https://www.romanyampolskiy.com/

LinkedIn: https://www.linkedin.com/in/romanyam/

X: https://x.com/romanyam

Visit our website: https://www.infotech.com/

Follow us on YouTube: https://www.youtube.com/@InfoTechRG