Imagine what happens if you load the wrong LLM, perhaps a malicious model planted to cause mischief or commit crime. How would you know? Jason proposes that, just as we sign our code, we should sign our AI models as well.
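To make the idea concrete, here is a minimal sketch of what signing a model could look like, assuming a local weights file and a raw Ed25519 keypair from Python's `cryptography` library; the file name `model.safetensors` is hypothetical, and a real deployment would likely use a signing service such as Sigstore rather than hand-managed keys.

```python
# A minimal sketch of model signing: hash the weights, sign the digest,
# and verify before loading. Assumes the `cryptography` package and a
# hypothetical local file "model.safetensors".
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def hash_model(path: str) -> bytes:
    """Compute a SHA-256 digest of the model file, streaming in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.digest()

# Publisher side: sign the model digest before distribution.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(hash_model("model.safetensors"))

# Consumer side: verify the digest against the publisher's public key
# before loading the model; raises InvalidSignature on any mismatch.
public_key.verify(signature, hash_model("model.safetensors"))
```

With a scheme like this, a tampered or swapped model fails verification before it is ever loaded, which is exactly the guarantee code signing gives for executables.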