Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

YouTube

Automatic Network Adaptation for Ultra-Low Uniform-Precision Quantization

EDGE AI FOUNDATION via YouTube

Overview

Learn about a research symposium presentation exploring automatic network adaptation techniques for ultra-low uniform-precision quantization of neural networks. Discover how neural channel expansion can optimize network structures by selectively expanding channels in quantization-sensitive layers while respecting hardware constraints. Explore the methodology that achieved record-breaking Top-1/Top-5 accuracy for 2-bit ResNet50 with reduced FLOPs and parameter size. Follow along as Tae-Ho Kim, co-founder and Technical Fellow at Nota AI, delves into the research background, the proposed algorithm, the impact of channel expansion, search space considerations, experimental results, and a qualitative analysis of this approach to neural network optimization.
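The "ultra-low uniform-precision" in the title refers to quantizing every layer to the same very small bit width, such as 2 bits, which leaves only four representable levels per tensor. A minimal NumPy sketch of such a uniform quantizer (an illustration of the general concept only, not the method presented in the talk):

```python
import numpy as np

def uniform_quantize(w, bits=2):
    """Quantize a tensor to 2**bits evenly spaced levels over its own
    [min, max] range, then map back to real values (fake quantization).
    Hypothetical helper for illustration; not the speaker's implementation."""
    levels = 2 ** bits - 1
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / levels
    codes = np.round((w - w_min) / scale)   # integer codes in 0..levels
    return codes * scale + w_min            # dequantized approximation

weights = np.array([-0.9, -0.3, 0.1, 0.8])
print(uniform_quantize(weights, bits=2))
```

At 2 bits the quantization error is large, which is why techniques like the channel expansion described above compensate by widening the layers most sensitive to that error.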

Syllabus

Introduction
Summary
Research Background
Proposed Algorithm
Impact of Channel Expansion
Search Space
Experiments
Qualitative Analysis
Conclusion

Taught by

EDGE AI FOUNDATION

