

Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization

DeepLearning.AI and Stanford University via Coursera


In the second course of the Deep Learning Specialization, you will open the deep learning black box to understand the processes that drive performance and systematically generate good results. By the end, you will be able to apply best practices for setting up train/dev/test sets and analyzing bias/variance in deep learning applications; use standard neural network techniques such as initialization, L2 and dropout regularization, hyperparameter tuning, batch normalization, and gradient checking; implement and apply a variety of optimization algorithms, such as mini-batch gradient descent, Momentum, RMSprop, and Adam, and check their convergence; and implement a neural network in TensorFlow.

The Deep Learning Specialization is a foundational program that will help you understand the capabilities, challenges, and consequences of deep learning and prepare you to participate in the development of leading-edge AI technology. It provides a pathway to gain the knowledge and skills to apply machine learning to your work, level up your technical career, and take the definitive step into the world of AI.
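To give a flavor of one technique the description mentions: dropout regularization randomly silences units during training so the network cannot over-rely on any one feature. A minimal NumPy sketch of the "inverted dropout" variant taught in the course (the function name and seed handling here are illustrative, not from the course materials):

```python
import numpy as np

def inverted_dropout(a, keep_prob=0.8, seed=0):
    """Inverted dropout: zero out each unit with probability 1 - keep_prob,
    then rescale the survivors by 1/keep_prob so the expected activation is
    unchanged (which means no extra rescaling is needed at test time)."""
    rng = np.random.default_rng(seed)
    mask = rng.random(a.shape) < keep_prob   # True for units that are kept
    return (a * mask) / keep_prob
```

With `keep_prob=0.8`, roughly 20% of activations are zeroed and the rest are scaled up by 1.25.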


  • Practical Aspects of Deep Learning
    • Discover and experiment with a variety of different initialization methods, apply L2 regularization and dropout to avoid model overfitting, then apply gradient checking to identify errors in a fraud detection model.
  • Optimization Algorithms
    • Develop your deep learning toolbox by adding more advanced optimizations, random minibatching, and learning rate decay scheduling to speed up your models.
  • Hyperparameter Tuning, Batch Normalization and Programming Frameworks
    • Explore TensorFlow, a deep learning framework that allows you to build neural networks quickly and easily, then train a neural network on a TensorFlow dataset.
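The optimization module covers Adam, which combines Momentum (an exponentially weighted average of gradients) with RMSprop-style scaling (an average of squared gradients). A minimal NumPy sketch of a single Adam parameter update, with illustrative names and the commonly used default hyperparameters:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update on parameters theta given gradient grad.
    m: running first-moment (Momentum) estimate; v: running second-moment
    (RMSprop) estimate; t: 1-based step count used for bias correction."""
    m = beta1 * m + (1 - beta1) * grad        # update momentum term
    v = beta2 * v + (1 - beta2) * grad ** 2   # update squared-gradient term
    m_hat = m / (1 - beta1 ** t)              # bias-correct the early steps
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

In practice the course has you use a framework implementation (e.g. TensorFlow's Adam optimizer) rather than hand-rolling the update, but the arithmetic is exactly this.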

Taught by

Andrew Ng


5.0 rating, based on 3 Class Central reviews

4.9 rating at Coursera based on 62,930 ratings

Start your review of Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization

  • Profile image for Nattapon Sub-Anake
    Nattapon Sub-Anake
    I finished this second deep learning course and liked it very much. I am looking forward to more courses in this deep learning series. Andrew is doing a great job here.
  • Silveira Homero
    This is a follow-up course to Neural Networks and Deep Learning, so you should start with that course first. The practical side of the teaching was very interesting.
  • Profile image for Raivis Joksts
    Raivis Joksts
    A good look at the most important parameters that impact DL models. Again, it is math-heavy, but it's OK to just understand the basic logic behind it. It has some TensorFlow practical tasks, which are easy to complete, but you may wish to read up on TensorFlow on your own to understand how it works in general, especially if you have used another DL framework before, like PyTorch.
