This course extends our existing background in Deep Learning to state-of-the-art techniques in audio, image, and text modeling. We'll see how dilated convolutions can efficiently model long-term temporal dependencies using a model called WaveNet. We'll also see how to inspect the representations in deep networks using a deep generator network, leading to some of the strongest insights into deep networks and the representations they learn. We'll then switch gears to one of the most exciting directions in Deep Learning thus far: Reinforcement Learning. We'll take a brief tour of this fascinating topic and explore toolkits released by OpenAI, DeepMind, and Microsoft. Finally, we're teaming up with Google Brain's Magenta Lab for our last session on Music and Art Generation. We'll explore Magenta's libraries, using RNNs and Reinforcement Learning to create generative and improvised music.
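To give a taste of the idea behind WaveNet's dilated convolutions, here is a minimal pure-Python sketch (not the course's code; the kernel weights and dilation schedule are illustrative). Each output sample looks back at inputs spaced `dilation` steps apart, and stacking layers whose dilation rates double lets the receptive field grow exponentially with depth:

```python
def dilated_causal_conv1d(x, weights, dilation):
    """1-D causal convolution with a given dilation rate.

    Output sample t depends only on inputs at t, t - dilation,
    t - 2*dilation, ... (zero-padded on the left), so no future
    samples leak into the prediction -- the property WaveNet needs
    for autoregressive audio generation.
    """
    k = len(weights)
    pad = (k - 1) * dilation           # left-pad to keep output length
    xp = [0.0] * pad + list(x)
    return [
        sum(weights[i] * xp[t + i * dilation] for i in range(k))
        for t in range(len(x))
    ]

# Stacking layers with dilations 1, 2, 4, 8 and kernel size 2 gives a
# receptive field of 1 + (2 - 1) * (1 + 2 + 4 + 8) = 16 samples:
# it grows exponentially with depth rather than linearly.
signal = [0.0] * 16
signal[8] = 1.0                        # an impulse to smear out
out = signal
for d in (1, 2, 4, 8):
    out = dilated_causal_conv1d(out, [0.5, 0.5], d)
```

With ordinary (dilation-1) convolutions, reaching the same 16-sample context would take 15 stacked layers instead of 4, which is why dilation makes long-range audio modeling tractable.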
What Students Are Saying:
"This course lets you explore things like audio synthesis, music generation and natural language processing using the TensorFlow skills learned in the previous two courses. It is very open-ended. Parag and the Google Magenta team give some great overviews and then set you free to explore each space further. Highly recommended!"