Description

In this episode, Dr West explains alignment faking in the context of the emerging morality of Large Language Models.