Automatic Network Adaptation for Ultra-Low Uniform-Precision Quantization
EDGE AI FOUNDATION via YouTube
Overview
Learn about a research symposium presentation exploring automatic network adaptation techniques for ultra-low uniform-precision quantization of neural networks. Discover how neural channel expansion can optimize a network's structure by selectively expanding channels in quantization-sensitive layers while respecting hardware constraints. Explore the methodology, which achieved record Top-1/Top-5 accuracy for 2-bit ResNet50 with reduced FLOPs and parameter count. Follow along as Tae-Ho Kim, co-founder and Technical Fellow at Nota AI, delves into the research background, the proposed algorithm, the impact of channel expansion, search space considerations, experimental results, and a qualitative analysis of this approach to neural network optimization.
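To make the two ideas in the talk concrete, here is a minimal sketch of (a) a uniform min-max quantizer at very low bit-width and (b) a greedy loop that widens the most quantization-sensitive layer under a budget. All names (`uniform_quantize`, `expand_sensitive_layers`), the MSE sensitivity proxy, and the toy channel-count cost model are illustrative assumptions, not the presenter's actual algorithm.

```python
import numpy as np

def uniform_quantize(w, bits=2):
    """Uniform min-max quantizer with 2**bits levels (illustrative,
    not the exact quantizer used in the presented work)."""
    levels = 2 ** bits - 1            # e.g. 3 steps between 4 levels at 2-bit
    w_max = np.abs(w).max()
    if w_max == 0:
        return w
    scale = 2 * w_max / levels        # step size over [-w_max, w_max]
    return np.round((w + w_max) / scale) * scale - w_max

def expand_sensitive_layers(sensitivity, widths, budget, step=8):
    """Greedily widen the layer with the highest quantization
    sensitivity per channel until a toy budget (total channel count,
    standing in for a real FLOPs/latency model) is exhausted."""
    widths = dict(widths)
    while sum(widths.values()) + step <= budget:
        worst = max(sensitivity, key=lambda n: sensitivity[n] / widths[n])
        widths[worst] += step
    return widths

w = np.array([-0.9, -0.2, 0.1, 0.8])
print(uniform_quantize(w, bits=2))            # snaps to the 4-level grid
print(expand_sensitive_layers(
    {"conv1": 0.5, "conv2": 0.1},             # e.g. MSE after quantization
    {"conv1": 16, "conv2": 16}, budget=48))   # only conv1 gets widened
```

In the actual method the sensitivity signal is learned during architecture search rather than computed from a fixed proxy, but the budget-constrained, selective-expansion structure is the same idea.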
Syllabus
Introduction
Summary
Research Background
Proposed Algorithm
Impact of Channel Expansion
Search Space
Experiments
Qualitative Analysis
Conclusion
Taught by
EDGE AI FOUNDATION