In university, I had a math teacher who would yell at me, “Mr. Görner, integrals are taught in kindergarten!” I get the same feeling today, when I read most free online resources dedicated to deep learning. My kindergarten education was apparently severely lacking in “dropout lullabies,” “cross-entropy riddles,” and “relu-gru-rnn-lstm monster stories.” Yet, these fundamental concepts are taken for granted by many, if not most, authors of online educational resources about deep learning.
To help more developers embrace deep learning techniques without the need to earn a Ph.D., I have attempted to flatten the learning curve by building a short crash course (3 hours total). The course focuses on a few basic network architectures, including dense, convolutional, and recurrent networks, and on training techniques such as dropout and batch normalization. (This course was initially presented at the Devoxx conference in Antwerp, Belgium, in November 2016.) By watching the recordings of the course and viewing the annotated slides, you can learn how to solve a couple of typical problems with neural networks and also pick up enough vocabulary and concepts to continue your deep learning self-education — for example, by exploring TensorFlow resources. (TensorFlow is Google’s internally developed framework for deep learning, which has been growing in popularity since it was released as open source in 2015.)
Chapter 1: Introduction; handwritten digits recognition (the simplest neural network) (Video | Slides)
Chapter 2: Ingredients for a tasty neural network + TensorFlow basics (Video | Slides)
Chapter 3: More cooking tools: multiple layers, relu, dropout, learning rate decay (Video | Slides)
Chapter 4: Convolutional networks (Video | Slides)
Chapter 5: Batch normalization (Video | Slides)
Chapter 6: The high-level API for TensorFlow (Video | Slides)
Chapter 7: Recurrent neural networks (and fun with Shakespeare) (Video | Slides)
Chapter 8: Google Cloud Machine Learning platform (Video | Slides)
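To give a taste of where Chapter 1 starts: the "simplest neural network" for handwritten digit recognition is a single dense layer with a softmax on top. Here is a minimal NumPy sketch of that computation (not the course's actual TensorFlow code); the weights are random and untrained, so it only illustrates the shape of the forward pass, not a working classifier.

```python
import numpy as np

# One dense layer for 28x28 MNIST-style digits: flatten the image into
# 784 pixel values, multiply by a weight matrix, add biases, then apply
# softmax to turn the 10 scores into class probabilities.
rng = np.random.default_rng(0)

W = rng.normal(scale=0.1, size=(784, 10))  # one weight column per digit class
b = np.zeros(10)                           # one bias per digit class

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

image = rng.random((28, 28))          # stand-in for one MNIST digit
logits = image.reshape(784) @ W + b   # dense layer: pixels . weights + biases
probs = softmax(logits)               # 10 probabilities, summing to 1

print(probs.shape)  # (10,)
```

Training (covered in the course) then consists of adjusting `W` and `b` so that, measured by cross-entropy, the probabilities match the true digit labels.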