Description

Large Language Models might sound smart, but can they predict what happens when a cat sees a cucumber? In this episode, host Emily Laird throws LLMs into the philosophical ring with World Models: AI systems that learn by watching, poking, and pushing stuff around (kind of like toddlers). Meta's Yann LeCun isn't impressed by chatbots, and honestly, he might have a point. We break down why real intelligence might need both brains and brawn, or at least a good sense of gravity.

Join the AI Weekly Meetups

Connect with Us: If you enjoyed this episode or have questions, reach out to Emily Laird on LinkedIn. Stay tuned for more insights into the evolving world of generative AI. And remember, you now know more about world models vs. LLMs, and that's pretty cool.
Connect with Emily Laird on LinkedIn