Regularization of Big Neural Networks

University of Central Florida via YouTube

Overview

This course covers regularization techniques for big neural networks. Learning outcomes include understanding overfitting, training with DropOut, the theoretical analysis of DropConnect, and methods for test time. Students will learn skills such as varying network size, analyzing convergence rates, and implementing deconvolutional networks. The teaching method combines theoretical analysis, practical examples, and experimental results. This course is intended for individuals interested in deep learning, neural networks, and computer vision.
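
To make the DropOut versus DropConnect contrast concrete, here is a minimal NumPy sketch (not taken from the lecture) of one fully connected layer under each scheme, with a common test-time shortcut; the function names, shapes, and the mean-weight scaling used for DropConnect at test time are illustrative assumptions, not the course's implementation.

import numpy as np

rng = np.random.default_rng(0)

def fc_dropout(x, W, b, p=0.5, train=True):
    """DropOut: zero each output activation with probability p.

    At test time, the standard approximation scales activations by
    the keep probability (1 - p) instead of sampling masks.
    """
    h = np.maximum(0.0, x @ W + b)          # ReLU activations
    if train:
        mask = rng.random(h.shape) > p      # keep each unit w.p. 1 - p
        return h * mask
    return h * (1.0 - p)                    # test-time expectation

def fc_dropconnect(x, W, b, p=0.5, train=True):
    """DropConnect: zero each individual weight with probability p.

    Test-time mean-weight scaling here is a simplification chosen for
    this sketch, not the method analyzed in the talk.
    """
    if train:
        M = rng.random(W.shape) > p         # per-weight Bernoulli mask
        return np.maximum(0.0, x @ (W * M) + b)
    return np.maximum(0.0, x @ (W * (1.0 - p)) + b)

x = rng.standard_normal((4, 8))             # toy batch of 4, 8 features
W = rng.standard_normal((8, 16)) * 0.1
b = np.zeros(16)
print(fc_dropout(x, W, b).shape, fc_dropconnect(x, W, b).shape)

Exact DropConnect inference would average over weight masks; the "Methods for Test Time" part of the syllabus concerns such approximations, which the sketch reduces to simple scaling for brevity.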

Syllabus

Intro
Big Neural Nets
Big Models Over-Fitting
Training with DropOut
DropOut/Connect Intuition
Theoretical Analysis of DropConnect
MNIST Results
Varying Size of Network
Varying Fraction Dropped
Comparison of Convergence Rates
Limitations of DropOut/Connect
Stochastic Pooling (see the sketch after this syllabus)
Methods for Test Time
Varying Size of Training Set
Convergence / Over-Fitting
Street View House Numbers
Deconvolutional Networks
Recap: Sparse Coding (Patch-based)
Reversible Max Pooling
Single Layer Cost Function
Single Layer Inference
Effect of Sparsity
Effect of Pooling Variables
Talk Overview
Stacking the Layers
Two Layer Example
Link to Parts and Structure Models
Caltech 101 Experiments
Layer 2 Filters
Classification Results: Caltech 101
Deconvolutional + Convolutional
Summary
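
As a companion to the "Stochastic Pooling" item above, the following hedged NumPy sketch pools a small non-negative activation map by sampling within each region at training time and by probability-weighted averaging at test time; the non-overlapping 2x2 regions, the names, and the all-zero-region handling are illustrative assumptions rather than the lecture's code.

import numpy as np

rng = np.random.default_rng(0)

def stochastic_pool(a, size=2, train=True):
    """Pool an (H, W) map of non-negative activations in size x size blocks.

    Training: sample one activation per region with probability
    proportional to its value. Test: use the probability-weighted
    average of the region.
    """
    H, W = a.shape
    out = np.zeros((H // size, W // size))
    for i in range(0, H - size + 1, size):
        for j in range(0, W - size + 1, size):
            region = a[i:i + size, j:j + size].ravel()
            total = region.sum()
            if total == 0:                  # all-zero region: leave output at 0
                continue
            p = region / total
            if train:
                idx = rng.choice(region.size, p=p)
                out[i // size, j // size] = region[idx]
            else:
                out[i // size, j // size] = (p * region).sum()
    return out

a = np.maximum(0.0, rng.standard_normal((4, 4)))  # toy post-ReLU map
print(stochastic_pool(a, train=True))
print(stochastic_pool(a, train=False))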

Taught by

UCF CRCV
