Inference with Torch-TensorRT Deep Learning Prediction for Beginners - CPU vs CUDA vs TensorRT

Python Simplified via YouTube

Overview

This course covers everything you need to know to use Torch-TensorRT on Nvidia GPUs. It steps through creating and setting up a Docker container, installing the Nvidia Container Toolkit and Nvidia Docker 2, loading ResNet50 and a sample image in PyTorch, applying image transforms, running predictions with ResNet50, applying the softmax function, mapping ImageNet class numbers to names, and predicting the top 5 classes of the sample image with topk. You'll also learn how to use a benchmark function for speed testing, run CPU and CUDA benchmarks, trace the model, convert the traced model into a Torch-TensorRT model, run TensorRT benchmarks, and download the accompanying Jupyter Notebook.

Syllabus

- intro
- clone Torch-TensorRT
- install and setup Docker
- install Nvidia Container Toolkit & Nvidia Docker 2
- Torch-TensorRT container option #1
- Torch-TensorRT Nvidia NGC container option #2
- import PyTorch
- load ResNet50
- load sample image
- sample image transforms
- batch size
- prediction with ResNet50
- softmax function
- ImageNet class number to name mapping
- predict top 5 classes of sample image topk
- speed test benchmark function
- CPU benchmarks
- CUDA benchmarks
- trace model
- convert traced model into a Torch-TensorRT model
- TensorRT benchmarks
- download Jupyter Notebook
- HOW DID I MISS THIS???
- thanks for watching!
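The benchmarking, tracing, and conversion steps from the syllabus can be sketched as below. The `benchmark` helper and the tiny stand-in model are illustrative (the course times ResNet50); `torch_tensorrt.compile` follows Nvidia's documented API but needs a CUDA GPU and the `torch_tensorrt` package from the course's container, so that step is guarded.

```python
import time
import torch

def benchmark(model, input_shape=(1, 3, 224, 224), device="cpu", nruns=20):
    """Average inference latency in seconds, in the spirit of the
    course's speed-test function (the name and defaults are illustrative)."""
    data = torch.rand(input_shape, device=device)
    model = model.to(device).eval()
    with torch.no_grad():
        for _ in range(5):              # warm-up runs, not timed
            model(data)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.time()
        for _ in range(nruns):
            model(data)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.time() - start) / nruns

# Tiny stand-in model so the sketch runs without downloading ResNet50.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())

cpu_ms = benchmark(model) * 1000
print(f"CPU: {cpu_ms:.2f} ms/inference")

# Tracing records the model as a static graph -- the form that
# Torch-TensorRT consumes.
traced = torch.jit.trace(model, torch.rand(1, 3, 224, 224))

try:
    import torch_tensorrt
except ImportError:
    torch_tensorrt = None

if torch_tensorrt is not None and torch.cuda.is_available():
    # Compile the traced model to a TensorRT engine at FP16, as in the video.
    trt_model = torch_tensorrt.compile(
        traced,
        inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
        enabled_precisions={torch.half},
    )
    print(f"TensorRT: {benchmark(trt_model, device='cuda') * 1000:.2f} ms")
else:
    print("Skipping TensorRT benchmark (needs CUDA + torch_tensorrt)")
```

Comparing the CPU, CUDA, and TensorRT numbers from this helper is exactly the CPU vs CUDA vs TensorRT comparison the course title refers to.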

Taught by

Python Simplified
