Overview
This course explains how exploiting sparsity can cut edge power consumption by as much as 95%, enabling tiny machine learning (tinyML). Learning outcomes include understanding the different types of sparsity (time, space, connectivity, and activation) and how each can be exploited to reduce the computation required for low-latency, low-power edge processing. The course also introduces the GrAI Core architecture and its event-based processing paradigm, which is designed to maximize the benefit of sparsity. The material is delivered as a webcast presentation with an emphasis on practical applications and their implications for tinyML tasks. The intended audience is anyone interested in edge processing, tiny machine learning, or optimizing power consumption for ML inference at the edge.
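The core idea behind activation sparsity can be illustrated with a short sketch. This is not the GrAI Core implementation, just a hypothetical illustration of the principle the course describes: in an event-based scheme, a zero activation triggers no work, so all of its multiply-accumulate (MAC) operations are skipped, and the compute cost scales with the number of nonzero "events" rather than the layer size.

```python
import random

def sparse_matvec(weights, activations):
    """Multiply a weight matrix by an activation vector, skipping zeros.

    Hypothetical illustration of activation sparsity: a zero input
    contributes nothing to the output, so its multiply-accumulates
    can be skipped entirely, as in event-based processing.
    """
    rows = len(weights)
    out = [0.0] * rows
    macs = 0  # multiply-accumulate operations actually performed
    for j, a in enumerate(activations):
        if a != 0.0:  # only nonzero activations ("events") trigger work
            for i in range(rows):
                out[i] += weights[i][j] * a
            macs += rows
    return out, macs

random.seed(0)
rows, cols = 64, 256
W = [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]
# ~95% of activations set to zero, mirroring the savings figure in the talk
x = [random.gauss(0, 1) if random.random() > 0.95 else 0.0
     for _ in range(cols)]

dense_macs = rows * cols  # what a dense accelerator would compute
y, sparse_macs = sparse_matvec(W, x)
print(f"dense MACs: {dense_macs}, sparse MACs: {sparse_macs}")
```

With roughly 95% of activations at zero, the MAC count drops by about the same factor, which is the mechanism behind the power savings the course discusses; real hardware gains depend on how cheaply zeros can be detected and skipped.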
Syllabus
Intro
About Jon Tapson
Edge workloads are different
Edge data is massive
Speech waveforms
What is sparsity
Deep neural networks
Fanout
Basic CNN
Typical gains
Neural Network Accelerator
How it works
Events
Use cases
Software stack
Runtime support
Sparsity performance
Summary
Questions
Conclusion
Edge Impulse
Sponsor
Next talk
Thanks
Taught by
tinyML