Description

We have all seen AI models fail, sometimes in spectacular ways. Yaron Singer joins us in this episode to discuss model vulnerabilities and the automatic prevention of bad outcomes. By separating concerns and creating a "firewall" around your AI models, it's possible to secure your AI workflows and prevent model failure.

Changelog++ members get a bonus 2 minutes at the end of this episode and zero ads. Join today!

Sponsors:

Featuring:

Show Notes:
