Overview
Syllabus
00:00:01 - Intro & Webinar Overview
00:03:19 - The Cost Challenge of LLMs
00:06:17 - Complexity in Real-World AI Tasks
00:09:34 - Evaluating Domain-Specific Accuracy
00:12:41 - Ranking and Response Generation
00:15:58 - Using DistilBERT and Model Combinations
00:19:25 - Tools for Efficient LLM Distillation
00:22:45 - Architecture of Distilled Models
00:26:11 - Presentation Format and Flow
00:29:34 - Use of Labeled and Weak Data
00:32:57 - Tradeoffs in Deploying Large Models
00:36:15 - Case Study: Enterprise Deployment
00:39:28 - Generalization vs. Specialization
00:42:52 - Model Size and Performance Comparison
00:46:20 - Final Thoughts & Q&A
Taught by
Snorkel AI