Overview
This course explores the Lottery Ticket Hypothesis: the finding that dense neural networks contain sparse subnetworks which, when trained in isolation from their original initialization, can match the accuracy of the full network. Learning outcomes include understanding how pruning reduces parameter counts, how sparsity can improve computational efficiency, and how to train pruned subnetworks effectively. Topics include network pruning, iterative magnitude pruning with rewinding, training pruned networks, linear mode connectivity, and the implications of these results for machine learning practice. The course is intended for anyone interested in neural networks, machine learning, and computational performance optimization.
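The central procedure covered, iterative magnitude pruning (IMP) with rewinding, alternates three steps: train the network, prune the smallest-magnitude surviving weights, then reset the remaining weights to their earlier values and repeat. The sketch below illustrates rewinding to the initial weights in PyTorch; it is a minimal illustration under stated assumptions, not the course's reference implementation, and the names `magnitude_mask`, `iterative_magnitude_pruning`, and `train_fn` are placeholders introduced here.

```python
import copy
import torch

def magnitude_mask(weights, mask, fraction):
    """Prune `fraction` of the surviving weights with the smallest magnitudes."""
    surviving = weights[mask.bool()].abs()
    k = int(fraction * surviving.numel())
    if k == 0:
        return mask
    threshold = surviving.kthvalue(k).values  # k-th smallest surviving magnitude
    return mask * (weights.abs() > threshold).float()

def iterative_magnitude_pruning(model, train_fn, rounds=5, fraction=0.2):
    """IMP with rewinding to the initial weights.

    train_fn(model) is assumed to train the model in place. In a full
    implementation the masks must also be re-applied after every optimizer
    step (e.g., via hooks) so that pruned weights stay at zero during training.
    """
    init_state = copy.deepcopy(model.state_dict())  # weights to rewind to
    # Prune only weight tensors (dim > 1); biases are left dense, a common simplification.
    masks = {name: torch.ones_like(p) for name, p in model.named_parameters() if p.dim() > 1}

    for _ in range(rounds):
        train_fn(model)  # train to completion
        # Prune the smallest-magnitude surviving weights in each layer.
        for name, p in model.named_parameters():
            if name in masks:
                masks[name] = magnitude_mask(p.data, masks[name], fraction)
        # Rewind the surviving weights to their initial values and re-apply the masks.
        model.load_state_dict(init_state)
        for name, p in model.named_parameters():
            if name in masks:
                p.data *= masks[name]
    return model, masks
```

With `fraction=0.2` per round, five rounds keep roughly 0.8^5 ≈ 33% of the prunable weights.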
Syllabus
Intro
Neural Networks are Large
Background: Network Pruning
Training is Expensive
Research Question
Motivation and Questions
Training Pruned Networks
Iterative Magnitude Pruning
Results
The Lottery Ticket Hypothesis
Broader Questions
Larger-Scale Settings
Scalability Challenges
Linear Mode Connectivity
Instability
Rewinding IMP Works
Takeaways
Our Current Understanding
Implications and Follow-Up
Taught by
MIT Embodied Intelligence