Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI will probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent? In a fiery talk, Yudkowsky explores why we need to act immediately to ensure smarter-than-human AI systems don't lead to our extinction.
TED Talks Daily is nominated for the Signal Award for Best Conversation Starter Podcast.
Interested in learning more about upcoming TED events? Follow these links:
TEDNext: ted.com/futureyou
TEDAI San Francisco: ted.com/ai-sf