
Overview

In this 59-minute talk from the Simons Institute, Carnegie Mellon University researcher Aditi Raghunathan explores the critical safety challenges that arise when AI systems encounter out-of-distribution scenarios. Examine how machine learning models can behave unpredictably when faced with inputs that differ from their training data, and understand the implications for AI safety guarantees. Learn about cutting-edge research approaches to creating more robust AI systems that maintain reliable performance even in unfamiliar situations. The presentation is part of the Safety-Guaranteed LLMs series and offers valuable insights for researchers, practitioners, and anyone concerned with the responsible development of artificial intelligence.
Syllabus
Out of Distribution, Out of Control? Understanding Safety Challenges in AI
Taught by
Simons Institute