DeepMind released an optimization strategy called synthetic gradients that could become one of the most popular approaches for training very deep neural networks, potentially even more popular than standard backpropagation. I don't think it got enough love, so I'm going to explain in my own words how it works and why I think it's so cool. Already know how backpropagation works? Skip to 14:10
Code for this video:
https://github.com/llSourcell/synthetic_gradients_explained
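If you just want a taste of the core idea before diving into the repo, here's a tiny numpy sketch of synthetic gradients. This is my own toy illustration, not the repo's code: the layer sizes, learning rate, and XOR data are all arbitrary assumptions, and the synthetic-gradient model M is the simplest possible (linear) version.

import numpy as np

np.random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny XOR dataset, just to have something to train on
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

lr = 0.1                          # learning rate (arbitrary choice)
W1 = np.random.randn(2, 8) * 0.5  # layer 1 weights
W2 = np.random.randn(8, 1) * 0.5  # layer 2 weights
M = np.zeros((8, 8))              # linear synthetic-gradient model for layer 1's output

for step in range(20000):
    # Forward pass
    h = sigmoid(X @ W1)       # layer 1 activation
    pred = sigmoid(h @ W2)    # layer 2 output

    # Layer 1 updates IMMEDIATELY using a *predicted* gradient dL/dh,
    # instead of waiting for the error to travel back through layer 2.
    sg = h @ M
    W1 -= lr * (X.T @ (sg * h * (1 - h)))

    # Layer 2 sees the true squared-error gradient as usual.
    err = pred - y
    delta2 = err * pred * (1 - pred)
    true_dh = delta2 @ W2.T   # true dL/dh, computed before W2 changes
    W2 -= lr * (h.T @ delta2)

    # Once the true gradient arrives, train M to predict it (a regression step).
    M -= lr * (h.T @ (sg - true_dh))

    if step % 4000 == 0:
        print(step, float(np.mean(err ** 2)))

The point is the decoupling: W1 never waits for the backward pass through W2. It trusts M's prediction and updates right away, and M itself gets corrected whenever the real gradient shows up. That's what lets layers train asynchronously in the full paper.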
Please subscribe! And like. And comment. That's what keeps me going.
Follow me on:
Twitter: https://twitter.com/sirajraval
Facebook: https://www.facebook.com/sirajology/
Snapchat: @llSourcell
More learning resources:
https://iamtrask.github.io/2017/03/21/synthetic-gradients/
https://arxiv.org/abs/1703.00522
https://deepmind.com/blog/decoupled-neural-networks-using-synthetic-gradients/
Join us in the Wizards Slack channel:
http://wizards.herokuapp.com/
And please support me on Patreon: https://www.patreon.com/user?u=3191693
Instagram: https://www.instagram.com/sirajraval/