Sensitivity Analysis of Hyperparameters in Deep Neural-Network Pruning
EDGE AI FOUNDATION via YouTube
Overview
Learn about hyperparameter sensitivity in deep neural network pruning through this technical conference talk from tinyML EMEA. Explore how structured pruning methods affect model deployment on resource-constrained devices, with a focus on reducing inference latency, memory footprint, and energy consumption. Discover state-of-the-art hyperparameter optimization techniques, including Bayesian optimization and BOHB, and see how they are applied to train various models on public datasets. Follow along as the speaker analyzes hyperparameter-performance distributions before and after pruning, quantifying sensitivity through distance metrics. Gain insights into maximizing network performance on resource-limited hardware and deepen your understanding of neural network generalization through practical examples and experimental results.
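The core analysis the talk describes, comparing hyperparameter-performance distributions before and after pruning with a distance metric, can be sketched as follows. This is a minimal illustration with made-up accuracy values; the 1-D Wasserstein distance is used here as one plausible choice of metric, not necessarily the one the speaker uses:

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Hypothetical validation accuracies collected across a hyperparameter
# sweep (e.g. learning rate x weight decay), for the dense model and
# for the same configurations after structured pruning.
acc_dense = np.array([0.91, 0.90, 0.88, 0.92, 0.89, 0.87])
acc_pruned = np.array([0.85, 0.83, 0.80, 0.88, 0.78, 0.74])

# Distance between the two performance distributions: a larger value
# means pruning shifted the hyperparameter-performance landscape more,
# i.e. the pruned model is more sensitive to hyperparameter choices.
sensitivity = wasserstein_distance(acc_dense, acc_pruned)
print(f"distribution shift: {sensitivity:.4f}")
```

In practice the accuracies would come from the experimental pipeline covered in the syllabus (train, prune, re-evaluate over the same hyperparameter grid), and the same comparison could be repeated per hyperparameter to rank which ones pruning is most sensitive to.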
Syllabus
Introduction
Pruning
Introduction to pruning
Experimental pipeline
Recap
Taught by
EDGE AI FOUNDATION