Overview
Watch this lecture by Song Mei of UC Berkeley, presented as part of the Simons Institute's Deep Learning Theory series, on the statistical foundations of contrastive pre-training and multimodal generative AI. The hour-long talk examines theoretical frameworks that underpin modern AI systems capable of processing multiple types of data simultaneously, offering insight into how such models learn meaningful representations across different modalities.
Syllabus
A Statistical Theory of Contrastive Pre-training and Multimodal Generative AI
Taught by
Simons Institute