Kyle and Linhda discuss attention and the transformer, an encoder/decoder architecture that extends the basic idea of static vector embeddings like word2vec into context-dependent representations.
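The attention mechanism they discuss can be illustrated with a minimal sketch of scaled dot-product attention in pure Python. This is an illustrative toy, not the episode's own code: each query vector scores every key, the scores become softmax weights, and the output is a weighted mix of value vectors, which is how a token's embedding becomes contextual.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of vectors.

    Each query attends over all keys; its output is a weighted
    average of the value vectors, i.e. a context-dependent embedding.
    """
    d = len(keys[0])  # key dimension, used for the 1/sqrt(d) scaling
    outputs = []
    for q in queries:
        scores = [
            sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
            for k in keys
        ]
        weights = softmax(scores)
        outputs.append([
            sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))
        ])
    return outputs

# A query aligned with the first key draws most of its output
# from the first value vector.
out = attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
```

Unlike a word2vec lookup, which returns the same vector for a word in every sentence, the output here depends on the other vectors present, which is the "more contextual use case" the episode refers to.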