Patrick and Jason explain transformers and large language models from the ground up. They cover attention, encoders and decoders, self-supervised learning, RLHF, and the key architectural ideas that made modern LLMs possible.