

Hands-on Machine Learning with AWS and NVIDIA

Amazon Web Services and Nvidia via Coursera

This course may be unavailable.

Overview

Machine learning (ML) projects can be complex, tedious, and time-consuming. AWS and NVIDIA address this challenge with fast, effective, and easy-to-use capabilities for your ML project.

This course is designed for ML practitioners, including data scientists and developers, who have a working knowledge of machine learning workflows. In this course, you will gain hands-on experience building, training, and deploying scalable machine learning models with Amazon SageMaker and Amazon EC2 instances powered by NVIDIA GPUs. Amazon SageMaker helps data scientists and developers prepare, build, train, and deploy high-quality ML models quickly by bringing together a broad set of purpose-built ML capabilities. Amazon EC2 instances powered by NVIDIA GPUs, together with NVIDIA software, offer high-performance, GPU-optimized compute in the cloud for efficient model training and cost-effective model inference hosting.

In this course, you will first get an overview of Amazon SageMaker and NVIDIA GPUs. Then, you will get hands-on by running a GPU-powered Amazon SageMaker notebook instance. You will then learn how to prepare a dataset for model training, build a model, execute model training, and deploy and optimize the ML model. You will also get hands-on practice applying this workflow to computer vision (CV) and natural language processing (NLP) use cases. After completing this course, you will be able to build, train, deploy, and optimize ML workflows with GPU acceleration in Amazon SageMaker, and you will understand the key Amazon SageMaker services applicable to computer vision and NLP tasks.
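The overview above centers on the build, train, and deploy workflow. The following is a minimal sketch of that workflow using the SageMaker Python SDK on NVIDIA GPU instances; the training script name, S3 path, and instance types are illustrative assumptions rather than the course's own code.

    # Minimal sketch: train on a GPU instance, then deploy to a GPU-backed endpoint.
    import sagemaker
    from sagemaker.pytorch import PyTorch

    role = sagemaker.get_execution_role()  # IAM role attached to the notebook instance

    # Train on an NVIDIA GPU instance (ml.p3.2xlarge provides a V100 GPU)
    estimator = PyTorch(
        entry_point="train.py",            # hypothetical training script
        role=role,
        instance_count=1,
        instance_type="ml.p3.2xlarge",
        framework_version="1.13",
        py_version="py39",
    )
    estimator.fit({"training": "s3://my-bucket/training-data"})  # hypothetical S3 path

    # Deploy the trained model for cost-effective GPU inference (ml.g4dn.xlarge has a T4 GPU)
    predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.g4dn.xlarge")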

Syllabus

  • Introduction to Amazon SageMaker and NVIDIA GPUs
    • In this module, you will learn about the purpose-built tools available within Amazon SageMaker for modern machine learning (ML). This includes a tour of the Amazon SageMaker Studio IDE that can be used to prepare, build, train and tune, and deploy and manage your own ML models. Then you will learn how to use Amazon SageMaker classic notebooks and Amazon SageMaker Studio notebooks to develop natural language processing (NLP), computer vision (CV), and other ML models using NVIDIA RAPIDS. You will also dive deep into NVIDIA GPUs, the NVIDIA NGC Catalog, and instances available on AWS for ML.
  • GPU Accelerated Machine Learning Workflows with RAPIDS and Amazon SageMaker
    • In this module, you will apply your knowledge of NVIDIA GPUs and Amazon SageMaker. You will learn the background of GPU-accelerated machine learning and perform the steps required to set up Amazon SageMaker. You will then learn about data acquisition and data transformation, move on to model design and training, and finish by evaluating hyperparameter optimization, AutoML, and GPU-accelerated inference (a RAPIDS sketch of this workflow follows the syllabus below).
  • Computer Vision
    • In this module, you will learn about the application of deep learning to computer vision (CV). Nature devoted roughly half of the human brain to visual processing, making it central to how we perceive the world. Endowing machines with sight has been a challenging endeavor, but advances in compute, algorithms, and data quality have made computer vision more accessible than ever before. From mobile cameras to industrial machine lenses, biological labs to hospital imaging, and self-driving cars to security cameras, data in pixel format is one of the most valuable types of data for consumers and companies. You will explore common CV applications and learn how to build an end-to-end object detection model on Amazon SageMaker using NVIDIA GPUs (an illustrative object detection sketch follows the syllabus below).
  • Natural Language Processing
    • In this module, you will learn about the application of deep learning technologies to the problem of language understanding. What does it mean to understand language? What is language modeling? What is the BERT language model, and why are such language models used in many popular services such as search, office productivity software, and voice agents? Are NVIDIA GPUs a fast and cost-efficient platform for training and deploying NLP models? In this module, you will find answers to all of those questions and more. Whether you are an experienced ML engineer considering implementation or a developer who wants to quickly deploy a language-understanding model like BERT, this module is for you (a BERT question answering sketch follows the syllabus below).
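As a companion to the RAPIDS module above, here is a minimal sketch, assuming a GPU-powered SageMaker notebook with RAPIDS installed, of GPU-accelerated data preparation with cuDF and model training with cuML; the file path and column names are hypothetical.

    import cudf
    from cuml.ensemble import RandomForestClassifier
    from cuml.model_selection import train_test_split

    # Load and transform data entirely on the GPU with cuDF
    df = cudf.read_csv("data.csv")                      # hypothetical dataset
    df = df.dropna()
    X = df.drop(columns=["label"]).astype("float32")    # hypothetical label column
    y = df["label"].astype("int32")
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

    # Train and evaluate a GPU-accelerated random forest with cuML
    model = RandomForestClassifier(n_estimators=100, max_depth=10)
    model.fit(X_train, y_train)
    print("accuracy:", model.score(X_test, y_test))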
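For the computer vision module, the sketch below runs a pretrained object detection model on a GPU with torchvision; it is an assumed illustration of the kind of workload involved, not the course's end-to-end SageMaker notebook.

    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT").to(device).eval()

    # A dummy 3-channel image tensor stands in for a real photo
    image = torch.rand(3, 480, 640, device=device)
    with torch.no_grad():
        predictions = model([image])   # list of dicts with boxes, labels, and scores
    print(predictions[0]["boxes"].shape, predictions[0]["scores"][:5])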
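For the natural language processing module, this minimal sketch performs question answering with a BERT-family model from the Hugging Face transformers library, running on a GPU when one is available; the model name is an assumption, and the course's own BERT deployment on SageMaker may differ.

    import torch
    from transformers import pipeline

    device = 0 if torch.cuda.is_available() else -1   # 0 = first GPU, -1 = CPU

    # Question answering with an assumed BERT-family model fine-tuned on SQuAD
    qa = pipeline(
        "question-answering",
        model="distilbert-base-cased-distilled-squad",
        device=device,
    )
    result = qa(
        question="What accelerates model training?",
        context="Amazon EC2 instances powered by NVIDIA GPUs accelerate model training.",
    )
    print(result["answer"], result["score"])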

Taught by

Anish Mohan, Adam Tetelman, Pavan Kumar Sunder, Isaac Privitera, and Abhilash Somasamudramath

