

Building and Deploying LLMs with Retrieval Augmented Generation

Canonical Ubuntu via YouTube

Overview

Learn how to enhance Large Language Models (LLMs) through Retrieval-Augmented Generation (RAG) in this technical tutorial featuring experts from Intel, Microsoft, and Canonical. Discover how RAG is implemented with Charmed OpenSearch, which provides the supporting services: data and model ingestion, vector database management, retrieval and ranking, and LLM connectivity. Explore the deployment architecture on Microsoft Azure Cloud and understand how Intel® AVX acceleration powers vector search for faster, higher-throughput RAG processing. Master practical techniques for building more accurate, context-aware, and informative AI systems through hands-on demonstrations and expert insights from Akash Shankaran, Ron Abellera, and Juan Pablo Norena.
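As a rough illustration of the retrieval-and-generation flow described above, the sketch below queries an OpenSearch k-NN index for context passages and prepends them to an LLM prompt. The index name ("docs"), field names ("embedding", "text"), the localhost endpoint, and the embed()/generate() placeholders are assumptions for this example, not details taken from the course.

```python
# Minimal RAG retrieval sketch against an OpenSearch k-NN index.
# Assumes an index "docs" whose documents carry a k-NN "embedding" field
# and a "text" field; embed() and generate() stand in for whatever
# embedding model and LLM the actual deployment connects to.
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

def retrieve(question: str, embed, k: int = 3) -> list[str]:
    """Embed the question and fetch the k nearest passages from OpenSearch."""
    query = {
        "size": k,
        "query": {"knn": {"embedding": {"vector": embed(question), "k": k}}},
    }
    hits = client.search(index="docs", body=query)["hits"]["hits"]
    return [hit["_source"]["text"] for hit in hits]

def answer(question: str, embed, generate) -> str:
    """Build a context-augmented prompt and pass it to the LLM."""
    context = "\n\n".join(retrieve(question, embed))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)
```

In a production setup the retrieved passages would typically also be re-ranked before prompting, which is the retrieval-and-ranking service the course attributes to Charmed OpenSearch.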

Syllabus

Build and deploy LLMs | Retrieval Augmented Generation | Data & AI Masters | Intel | Microsoft Azure

Taught by

Canonical Ubuntu

