Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI would probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent? In a fiery talk, Yudkowsky explores why we need to act immediately to ensure smarter-than-human AI systems don't lead to our extinction.

For a chance to give your own TED Talk, fill out the Idea Search Application: ted.com/ideasearch.


Interested in learning more about upcoming TED events? Follow these links:

TEDNext: ted.com/futureyou

TEDSports: ted.com/sports

TEDAI Vienna: ted.com/ai-vienna

TEDAI San Francisco: ted.com/ai-sf


Hosted on Acast. See acast.com/privacy for more information.