Overview
Learn how to enhance Large Language Models (LLMs) with Retrieval-Augmented Generation (RAG) in this technical tutorial featuring experts from Intel, Microsoft, and Canonical. Discover how to implement RAG using Charmed OpenSearch, which provides the supporting services end to end: data and model ingestion, vector database management, retrieval and ranking, and LLM connectivity. Explore the deployment architecture on Microsoft Azure Cloud and see how Intel® AVX acceleration speeds up vector search for faster, higher-throughput RAG processing. Through hands-on demonstrations, Akash Shankaran, Ron Abellera, and Juan Pablo Norena share practical techniques for building more accurate, context-aware, and informative AI systems.
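To make the retrieval-and-ranking step concrete, here is a minimal sketch of a RAG retrieval query against an OpenSearch vector index, assuming a cluster with the k-NN plugin enabled. The endpoint, credentials, index name ("docs"), field names, and embedding model are illustrative placeholders, not the tutorial's exact setup.

```python
# Hypothetical RAG retrieval sketch: embed a question, run k-NN vector
# search in OpenSearch, and assemble the retrieved passages into a prompt.
from opensearchpy import OpenSearch
from sentence_transformers import SentenceTransformer

# Placeholder connection details; a Charmed OpenSearch deployment would
# supply its own endpoint and credentials.
client = OpenSearch(
    hosts=[{"host": "localhost", "port": 9200}],
    http_auth=("admin", "admin"),
    use_ssl=True,
    verify_certs=False,
)

# Example embedding model; any model matching the index's vector dimension works.
embedder = SentenceTransformer("all-MiniLM-L6-v2")

def retrieve(question: str, k: int = 4) -> list[str]:
    """Embed the question and return the k nearest passages from the index."""
    vector = embedder.encode(question).tolist()
    response = client.search(
        index="docs",  # hypothetical index with an "embedding" knn_vector field
        body={
            "size": k,
            "query": {"knn": {"embedding": {"vector": vector, "k": k}}},
        },
    )
    return [hit["_source"]["text"] for hit in response["hits"]["hits"]]

# The LLM-connectivity step then prepends the retrieved context to the prompt.
context = "\n".join(retrieve("How does AVX acceleration speed up vector search?"))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: ..."
```

The nearest-neighbor distance computations in this query are where SIMD instruction sets such as Intel AVX matter: they let the engine compare a query vector against many stored vectors per instruction, which is the throughput gain the tutorial highlights.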
Syllabus
Build and deploy LLMs | Retrieval Augmented Generation | Data & AI Masters | Intel | Microsoft Azure
Taught by
Canonical Ubuntu