Controlling Untrusted AIs With Monitors

Overview

In this hour-long talk, Ethan Perez of Anthropic examines how untrusted AI systems can be controlled through monitoring. He discusses approaches to obtaining safety guarantees for large language models (LLMs) by pairing them with monitors that detect and block potentially harmful outputs or behaviors, with the goal of maintaining control over increasingly capable AI systems even when the underlying models themselves cannot be fully trusted or verified as safe.
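To make the basic pattern concrete, here is a minimal Python sketch of the kind of pipeline the talk's framing suggests: every output of an untrusted model is scored by a separate monitor before it is allowed through. This is an illustration, not code from the talk; the function names, the suspicion-score scale, and the thresholds are all assumptions.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical stand-ins: in practice these would call an untrusted (more
# capable) model and a trusted monitor model. The names, the [0, 1] scoring
# scale, and the thresholds below are illustrative assumptions, not details
# taken from the talk.
UntrustedModel = Callable[[str], str]
Monitor = Callable[[str, str], float]  # (task, output) -> suspicion in [0, 1]


@dataclass
class MonitoredResult:
    output: str
    suspicion: float
    action: str  # "allow", "block", or "escalate"


def run_with_monitor(
    task: str,
    untrusted: UntrustedModel,
    monitor: Monitor,
    block_threshold: float = 0.9,
    escalate_threshold: float = 0.5,
) -> MonitoredResult:
    """Generate with the untrusted model, then gate the result on the
    monitor's suspicion score before it reaches the user or executes."""
    output = untrusted(task)
    suspicion = monitor(task, output)
    if suspicion >= block_threshold:
        # Highly suspicious: discard the output entirely.
        return MonitoredResult("", suspicion, "block")
    if suspicion >= escalate_threshold:
        # Ambiguous: hold the output for human review instead of acting on it.
        return MonitoredResult(output, suspicion, "escalate")
    return MonitoredResult(output, suspicion, "allow")


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    fake_untrusted = lambda task: f"(untrusted answer to: {task})"
    fake_monitor = lambda task, out: 0.2  # a real monitor would score the pair
    print(run_with_monitor("summarize this document", fake_untrusted, fake_monitor))
```

The design point is that the monitor, not the untrusted model, decides what is released, so the safety argument rests on the (presumably weaker but trusted) monitor rather than on the model being controlled.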
Taught by
Simons Institute