

Introduction to AI in the Data Center

Nvidia via Independent

Overview

Welcome to the Introduction to AI in the Data Center Course!
As you know, Artificial Intelligence, or AI, is transforming society in many ways.
From speech recognition to improved supply chain management, AI technology provides enterprises with the compute power, tools, and algorithms their teams need to do their life’s work.

But how does AI work in a data center? What hardware and software infrastructure is needed?
These are some of the questions that this course will help you address.
This course introduces the concepts and terminology that will help you start your journey into AI and GPU computing in the data center.

You will learn about:
* AI and AI use cases, Machine Learning, Deep Learning, and how training and inference happen in a Deep Learning Workflow.
* The history and architecture of GPUs, how they differ from CPUs, and how they are revolutionizing AI.
* Deep learning frameworks, the AI software stack, and considerations when deploying AI workloads in an on-premises data center, in the cloud, in a hybrid model, or in a multi-cloud environment.
* Requirements for multi-system AI clusters, considerations for infrastructure planning (including servers, networking, and storage), and tools for cluster management, monitoring, and orchestration.

This course is part of the preparation material for the NVIDIA-Certified Associate "AI in the Data Center" certification.
This certification will take your expertise to the next level and support your professional development.

Who should take this course?
* IT Professionals
* System and Network Administrators
* DevOps Engineers
* Data Center Professionals

No prior experience is required.
This is an introductory course to AI and GPU computing in the data center.

To learn more about NVIDIA’s certification program, visit:
https://academy.nvidia.com/en/nvidia-certified-associate-data-center/

So let's get started!

Syllabus

  • Introduction to GPU Computing | NVIDIA Training
    • In this module, you will explore AI use cases across different industries; learn the concepts of AI, Machine Learning (ML), and Deep Learning (DL); and understand what a GPU is and how it differs from a CPU.
      You will also learn about the software ecosystem that allows developers to use GPU computing for data science, and considerations when deploying AI workloads in an on-premises data center, in the cloud, in a hybrid model, or in a multi-cloud environment.
  • Rack Level Considerations | NVIDIA Training
    • In this module, we will cover rack-level considerations when deploying AI clusters.
      You will learn about requirements for multi-system AI clusters, storage and networking considerations for such deployments, and an overview of NVIDIA reference architectures, which provide best practices to design systems for AI workloads.
  • Data Center Level Considerations | NVIDIA Training
    • This unit covers data-center-level considerations when deploying AI clusters, such as infrastructure provisioning, workload management, orchestration and job scheduling, tools for cluster management and monitoring, and power and cooling considerations for data center deployments.

      Lastly, you will learn about AI infrastructure offered by NVIDIA partners through the DGX-ready data center colocation program.
  • Course Completion Quiz - Introduction to AI in the Data Center
    • It is highly recommended that you complete all the course activities before you begin the quiz.
      Good luck!

Taught by

NVIDIA Training Support

Reviews

4.7 rating at Independent based on 46 ratings
