In this Richard M. Karp Distinguished Lecture, Yoshua Bengio of IVADO, Mila, and Université de Montréal discusses the catastrophic risks posed by superintelligent AI agents. He examines how leading AI companies' focus on building generalist AI agents—systems that autonomously plan, act, and pursue goals—creates significant public safety and security risks, from potential misuse to irreversible loss of human control. He also shows how current AI training methods contribute to these risks, citing evidence that AI agents can engage in deception or pursue self-preservation goals contrary to human interests. Bengio presents a safer alternative to agency-driven AI development: a non-agentic "Scientist AI" designed to explain the world rather than act in it, featuring a world model that generates theories from data and a question-answering inference machine with explicit uncertainty measures. He argues that this approach could accelerate scientific progress, including in AI safety itself, while serving as a guardrail against potentially dangerous AI agents. Bengio, a Turing Award recipient and one of the world's leading experts in artificial intelligence and deep learning, makes a compelling case for researchers, developers, and policymakers to pursue this safer path for AI innovation.