This 3-hour course (video + slides) offers developers a quick introduction to deep-learning fundamentals, with some TensorFlow thrown into the bargain.
Deep learning (aka neural networks) is a popular approach to building machine-learning models, and it is capturing the imagination of developers. If you want to acquire deep-learning skills but lack the time, I feel your pain.
In university, I had a math teacher who would yell at me, “Mr. Görner, integrals are taught in kindergarten!” I get the same feeling today, when I read most free online resources dedicated to deep learning. My kindergarten education was apparently severely lacking in “dropout lullabies,” “cross-entropy riddles,” and “relu-gru-rnn-lstm monster stories.” Yet, these fundamental concepts are taken for granted by many, if not most, authors of online educational resources about deep learning.
To help more developers embrace deep-learning techniques without the need to earn a Ph.D., I have attempted to flatten the learning curve by building a short crash course (3 hours total). The course focuses on a few basic network architectures, including dense, convolutional, and recurrent networks, and on training techniques such as dropout and batch normalization. (This course was initially presented at the Devoxx conference in Antwerp, Belgium, in November 2016.)

By watching the recordings of the course and viewing the annotated slides, you can learn how to solve a couple of typical problems with neural networks and also pick up enough vocabulary and concepts to continue your deep-learning self-education, for example by exploring TensorFlow resources. (TensorFlow is Google's internally developed framework for deep learning, which has been growing in popularity since it was released as open source in 2015.)
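To give a flavor of the kind of concept the course demystifies: dropout, one of the training techniques mentioned above, randomly zeroes a fraction of a layer's activations during each training step so the network cannot over-rely on any single unit. Here is a minimal sketch of "inverted" dropout in plain Python; this is an illustration of the idea only, not code from the course:

```python
import random

def dropout(activations, rate=0.5, training=True):
    """Inverted dropout: during training, zero each activation with
    probability `rate` and scale the survivors by 1/(1 - rate) so the
    expected activation stays the same. At inference time, pass
    activations through unchanged."""
    if not training:
        return list(activations)
    keep = 1.0 - rate
    return [a / keep if random.random() < keep else 0.0
            for a in activations]
```

In real models you would use a framework-provided layer (for example, a dropout layer in TensorFlow) rather than rolling your own, but the underlying mechanism is just this.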