Overview
This course shows how to speed up Neural Architecture Search (NAS) by scoring candidate architectures at initialization, using statistics of the network's Jacobian around individual data points, so that hundreds or thousands of models no longer need to be trained to completion. It covers linearization around data points, linearization statistics, NAS benchmarks, and experiments, and closes with insights and comments. The intended audience includes anyone interested in AI research, machine learning, and streamlining the neural network design process.
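The core idea of a training-free Jacobian score can be sketched as follows. This is a minimal illustration, assuming a tiny two-layer ReLU network and a simple correlation-based score; it is not the exact formula from the paper the course discusses, only the general recipe: compute per-example Jacobians at initialization, then reward architectures whose Jacobians differ across data points.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(d_in, d_hidden, d_out, rng):
    # Randomly initialized (untrained) two-layer ReLU network.
    W1 = rng.normal(0, np.sqrt(2 / d_in), (d_hidden, d_in))
    W2 = rng.normal(0, np.sqrt(2 / d_hidden), (d_out, d_hidden))
    return W1, W2

def input_jacobian(W1, W2, x):
    # For f(x) = W2 @ relu(W1 @ x), the Jacobian w.r.t. the input is
    # W2 @ diag(relu'(W1 @ x)) @ W1.
    mask = (W1 @ x > 0).astype(float)
    return (W2 * mask) @ W1  # broadcasting applies diag(mask)

def jacobian_score(W1, W2, X, eps=1e-5):
    # Stack flattened per-example Jacobians and measure how correlated
    # they are across data points: low correlation means the network's
    # local linear maps are flexible, which this score rewards.
    J = np.stack([input_jacobian(W1, W2, x).ravel() for x in X])
    C = np.corrcoef(J)                       # (N, N) correlation matrix
    return -np.sum(np.log(np.abs(C) + eps))  # penalize high correlation

# Score one candidate architecture on a small batch of data.
W1, W2 = init_mlp(8, 32, 4, rng)
X = rng.normal(size=(16, 8))
score = jacobian_score(W1, W2, X)
```

In an actual search, this score would be computed for every candidate architecture on a single minibatch, and only the highest-scoring candidates would be trained, which is what removes most of the training cost.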
Syllabus
- Intro & Overview
- Neural Architecture Search
- Controller-based NAS
- Architecture Search Without Training
- Linearization Around Datapoints
- Linearization Statistics
- NAS-Bench-201 Benchmark
- Experiments
- Conclusion & Comments
Taught by
Yannic Kilcher