In this episode we discuss the paper "Training language models to follow instructions with human feedback" by Ouyang et al. (2022). We discuss the RLHF paradigm and how important RL is to fine-tuning GPT.