Description

Patrick and Jason explain transformers and large language models from the ground up. They cover attention, encoders and decoders, self-supervised learning, RLHF, and the key architectural ideas that made modern LLMs possible.