Overview
In this 13-minute conference talk from Conf42 LLMs 2025, Hilik Paz explores how to design and run evaluations (evals) for LLM-based systems. Learn the fundamentals of evals in generative AI, the unique challenges of testing AI systems, and practical implementation strategies. Follow along with a real-world technical support bot example that demonstrates different evaluation types and methodologies, and watch a live experiment whose results show how to properly assess AI performance. The presentation offers a structured approach to building effective evaluation frameworks, helping developers and organizations ensure their LLM applications meet quality standards.
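For a concrete feel of what an eval looks like before watching, here is a minimal sketch in Python of an eval harness for a support bot like the one discussed in the talk. Everything in it (the run_support_bot stub, the sample cases, the keyword-overlap scoring) is an illustrative assumption, not code from the presentation; real evals often layer LLM-as-judge scoring on top of deterministic checks like this.

```python
# Minimal sketch of an eval harness for a technical support bot.
# All names here (run_support_bot, the sample cases, keyword_eval)
# are illustrative assumptions, not taken from the talk.

from dataclasses import dataclass


@dataclass
class EvalCase:
    """One test case: a user question plus keywords a good answer should contain."""
    question: str
    expected_keywords: list[str]


def run_support_bot(question: str) -> str:
    """Stand-in for the real LLM call; replace with your model or API of choice."""
    return "Try restarting the router, then check the cable connection."


def keyword_eval(answer: str, case: EvalCase) -> float:
    """Deterministic eval: fraction of expected keywords present in the answer."""
    answer_lower = answer.lower()
    hits = sum(1 for kw in case.expected_keywords if kw.lower() in answer_lower)
    return hits / len(case.expected_keywords)


if __name__ == "__main__":
    cases = [
        EvalCase("My internet is down, what should I do?",
                 ["restart", "router", "cable"]),
        EvalCase("How do I reset my password?",
                 ["reset", "password", "link"]),
    ]
    # Run the bot on every case and report per-case and mean scores.
    scores = [keyword_eval(run_support_bot(c.question), c) for c in cases]
    for case, score in zip(cases, scores):
        print(f"{score:.2f}  {case.question}")
    print(f"mean score: {sum(scores) / len(scores):.2f}")
```

Deterministic checks like this are cheap and reproducible, which is why they typically run first, with model-graded evals reserved for qualities (tone, helpfulness) that keywords can't capture.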
Syllabus
00:00 Introduction to Arato AI and the Arato Platform
00:37 Understanding Evals in Gen AI
01:16 Challenges in AI Testing
02:23 Example Use Case: Technical Support Bot
04:03 Types of Evals
05:31 Implementing Evals in Practice
08:44 Live Experiment and Results
12:18 Conclusion and Next Steps
Taught by
Conf42