Fine-Tune, Serve and Scale Workflows for Large Language Models
Overview
This one-hour webinar explores how to optimize AI workflows using modern MLOps and LLMOps tools. You will learn the fundamentals of machine learning operations pipelines, how to fine-tune a Hugging Face LLM for sequence classification on unstructured data, how to build scalable, reproducible, production-grade workflows with Union.ai, and how to deploy fine-tuned models in real-time Streamlit applications. The session includes a practical demonstration of fine-tuning BERT, hands-on setup instructions, examples of simple and advanced workflows, and deployment strategies. Designed for AI practitioners who want to streamline their LLM development process, the webinar covers best practices for building reliable frameworks that increase efficiency and improve reproducibility in AI model training and deployment.
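The core step described above, fine-tuning a Hugging Face model for sequence classification, can be sketched as below. This is a minimal illustration, not the webinar's actual demo code: it uses a tiny randomly initialized BERT config and fake token IDs so it runs without downloading pretrained weights; a real fine-tuning run would load `bert-base-uncased` (or similar) with `from_pretrained` and tokenize real text.

```python
# Hedged sketch: one gradient step of BERT fine-tuning for sequence
# classification, assuming the `transformers` and `torch` libraries.
import torch
from transformers import BertConfig, BertForSequenceClassification

# Tiny randomly initialized model (stand-in for a pretrained checkpoint).
config = BertConfig(
    vocab_size=1000,
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=128,
    num_labels=2,  # binary classification head
)
model = BertForSequenceClassification(config)

# Fake batch: 4 sequences of 16 token IDs, with binary labels
# (stand-ins for real tokenized text).
input_ids = torch.randint(0, 1000, (4, 16))
labels = torch.tensor([0, 1, 0, 1])

# Passing labels makes the model return a cross-entropy loss alongside logits.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
outputs = model(input_ids=input_ids, labels=labels)
outputs.loss.backward()
optimizer.step()

print(tuple(outputs.logits.shape))  # (batch_size, num_labels) -> (4, 2)
```

In the production-grade version discussed in the session, steps like data loading, training, and evaluation would each become tasks in a Union.ai workflow, and the resulting model would be served behind a Streamlit front end.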
Syllabus
0:00 Introduction
1:45 Speaker Background
2:38 MLOps & LLMOps Overview
4:47 Why Workflows Matter
6:24 Best Practices for MLOps
9:33 Demo: Fine-Tuning BERT
16:15 Hands-On Setup
20:52 Simple Workflow Example
26:35 Advanced Pipeline Walkthrough
38:00 Deploying the Model
46:09 Q&A Highlights
Taught by
Data Science Dojo