Overview
This course covers the following learning outcomes and goals: understanding the basics of Tensor Processing Units (TPUs), accelerating deep learning with TPUs, troubleshooting and optimizing for TPUs, and exploring what it takes to build a TPU. Along the way it teaches practical skills such as working with TFRecords and making use of systolic arrays and bfloat16 arithmetic, and it surveys TPU applications from AlphaGo to speech recognition. Teaching is delivered through notebook walkthroughs, practical demonstrations, and interactive sessions. The intended audience is data science professionals who want to accelerate their deep learning models with TPUs.
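The bfloat16 format mentioned above keeps float32's sign bit and full 8-bit exponent but truncates the mantissa to 7 bits, which is why it preserves float32's range while halving memory traffic. As a rough illustration (not course material; function names are my own), that truncation can be sketched in plain Python:

```python
import struct

def to_bfloat16_bits(x: float) -> int:
    # Pack x as a float32 and keep only the high 16 bits:
    # sign (1) + exponent (8) + truncated mantissa (7).
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def from_bfloat16_bits(b: int) -> float:
    # Re-expand by zero-filling the dropped 16 mantissa bits.
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

# Powers of two round-trip exactly; most values lose low mantissa bits.
print(from_bfloat16_bits(to_bfloat16_bits(1.0)))         # → 1.0
print(from_bfloat16_bits(to_bfloat16_bits(3.14159265)))  # → 3.140625
```

Real bfloat16 hardware rounds rather than truncates, but the bit layout is the point: same exponent width as float32, far fewer mantissa bits.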
Syllabus
TPU Notebook Walkthrough: Introduction to TFRecords | Kaggle.
Learn With Me: Getting Started with Tensor Processing Units (TPUs) | Kaggle.
TPUs, systolic arrays, and bfloat16: accelerate your deep learning | Kaggle.
TPUs: AlphaGo to Speech Recognition | Kaggle.
Getting Curious: What it takes to build a TPU | Kaggle.
Accelerator Power Hour for data science professionals with Kaggle Grandmasters (Cloud AI Huddle).
Learn With Me: Troubleshooting and Optimizing for Tensor Processing Units (TPUs) | Kaggle.
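The systolic-array session listed above covers the hardware idea behind TPU matrix units: operands stream through a grid of multiply-accumulate cells so each value is reused many times without returning to memory. A minimal output-stationary simulation of that dataflow, in plain Python (an illustrative sketch, not code from the course):

```python
def systolic_matmul(A, B):
    """Simulate C = A @ B on an n x n output-stationary systolic array.

    A values stream rightward along rows and B values stream downward
    along columns, skewed in time so that at cycle t, processing element
    (i, j) holds A[i][k] and B[k][j] with k = t - i - j.
    """
    n = len(A)
    C = [[0] * n for _ in range(n)]
    # Operand registers currently held by each processing element.
    a_reg = [[0] * n for _ in range(n)]
    b_reg = [[0] * n for _ in range(n)]
    for t in range(3 * n - 2):  # enough cycles to drain the array
        # Shift: A registers move one cell right, B registers one cell down.
        for i in range(n):
            for j in range(n - 1, 0, -1):
                a_reg[i][j] = a_reg[i][j - 1]
        for j in range(n):
            for i in range(n - 1, 0, -1):
                b_reg[i][j] = b_reg[i - 1][j]
        # Inject skewed inputs at the left and top edges (zeros outside range).
        for i in range(n):
            k = t - i
            a_reg[i][0] = A[i][k] if 0 <= k < n else 0
        for j in range(n):
            k = t - j
            b_reg[0][j] = B[k][j] if 0 <= k < n else 0
        # Every cell multiplies its current operand pair and accumulates.
        for i in range(n):
            for j in range(n):
                C[i][j] += a_reg[i][j] * b_reg[i][j]
    return C

print(systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```

The payoff the course discusses is that each matrix element is fetched from memory once and then reused as it flows through the grid, which is what makes the dense matrix units in a TPU so efficient.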
Taught by
Kaggle