A lecture by Basil Saeed of Stanford University exploring the mathematical foundations of high-dimensional empirical risk minimization. Dive into a general model in which the data points are d-dimensional isotropic Gaussian vectors, the models are parametrized by matrices in R^(d×k), and the losses depend on the data only through their projections. Understand how the Kac-Rice formula from Gaussian process theory can be applied to derive bounds on the expected number of local minima under proportional asymptotics. Learn how this approach enables sharp characterizations of estimation and prediction errors, particularly for convex losses, where high-dimensional asymptotics had not been rigorously established for k ≥ 2. The lecture also covers the spectrum of the Hessian at the minimizers. Based on joint work with Kiana Asgari and Andrea Montanari, and presented as part of the Deep Learning Theory program at the Simons Institute.
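To make the setup concrete, here is a minimal sketch of the kind of model described above: n isotropic Gaussian data points in R^d, a parameter matrix W in R^(d×k), and an empirical risk that depends on each data point only through the projection W^T x_i. The squared loss, the planted-model labels, and all variable names are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

# Illustrative dimensions (assumptions, not from the lecture).
rng = np.random.default_rng(0)
n, d, k = 200, 50, 2

# Data points x_i ~ N(0, I_d): rows of X are isotropic Gaussian vectors.
X = rng.standard_normal((n, d))

# A hypothetical "planted" parameter matrix W_star in R^(d x k),
# used here only to generate toy labels.
W_star = rng.standard_normal((d, k)) / np.sqrt(d)
y = np.sum(X @ W_star, axis=1)

def empirical_risk(W):
    """R_n(W) = (1/n) * sum_i loss(W^T x_i; y_i), with squared loss as an example.

    The key structural feature from the lecture's model: the loss touches
    each x_i only through the k-dimensional projection W^T x_i.
    """
    projections = X @ W                # shape (n, k): the projections W^T x_i
    preds = np.sum(projections, axis=1)
    return np.mean((preds - y) ** 2)

print(empirical_risk(W_star))          # zero at the planted parameters
print(empirical_risk(np.zeros((d, k))))  # strictly positive away from them
```

The proportional asymptotics studied in the lecture correspond to letting n and d grow together with n/d fixed; the sketch above just fixes one such pair to show the objective.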
Overview
Syllabus
Local minima of the empirical risk in high dimension
Taught by
Simons Institute