Description

This live session will focus on the details of music generation using the TensorFlow library. The goal is for you to understand how to encode music, feed it to a well-tuned model, and use it to generate really cool sounds. And I'm NOT going to use Google Hangouts this time; instead, I'll do this with a green screen and a DSLR camera :)
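One common way to encode music for a model is a "piano roll": a binary matrix of time steps by pitches. Here's a minimal sketch of that idea in plain Python; the `(pitch, start, duration)` note format and the `notes_to_piano_roll` helper are simplifications for illustration, not the exact representation used in the video's repo.

```python
def notes_to_piano_roll(notes, n_steps, n_pitches=128):
    """Build a binary piano roll: each row is a time step,
    and a 1 marks a pitch that is sounding at that step."""
    roll = [[0] * n_pitches for _ in range(n_steps)]
    for pitch, start, duration in notes:
        # Mark every time step the note covers, clipped to the roll length.
        for t in range(start, min(start + duration, n_steps)):
            roll[t][pitch] = 1
    return roll

# Example: middle C (MIDI note 60) held for 2 steps, then E (64) for 1 step.
melody = [(60, 0, 2), (64, 2, 1)]
roll = notes_to_piano_roll(melody, n_steps=4)
print([row[60] for row in roll])  # [1, 1, 0, 0]
```

A matrix like this can be fed to a recurrent model one row (time step) at a time, and the model's output at each step can be sampled to generate new rows, i.e. new music.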

Code for this video:
https://github.com/llSourcell/music_demo_live/

Please subscribe! And like. And comment. That's what keeps me going.

My Udacity course is open for enrollment until this Saturday at midnight:
https://www.udacity.com/course/deep-learning-nanodegree-foundation--nd101

More Learning Resources:
http://www.asimovinstitute.org/analyzing-deep-learning-tools-music/
http://www.hexahedria.com/2015/08/03/composing-music-with-recurrent-neural-networks/
https://github.com/hexahedria/biaxial-rnn-music-composition
http://www.hexahedria.com/2016/08/08/summer-research-on-the-hmc-intelligent-music-software-team
https://magenta.tensorflow.org/
https://github.com/farizrahman4u/seq2seq
http://stackoverflow.com/questions/14448380/how-do-i-read-a-midi-file-change-its-instrument-and-write-it-back
https://github.com/vishnubob/python-midi

Join us in the Wizards Slack channel:
http://wizards.herokuapp.com/

Please support me on Patreon:
https://www.patreon.com/user?u=3191693

Streaming Live from UploadVR's Studio in San Francisco!: https://www.youtube.com/uploadvr
Follow me:
Twitter: https://twitter.com/sirajraval
Facebook: https://www.facebook.com/sirajology
Instagram: https://www.instagram.com/sirajraval/