Benign Overfitting - Peter Bartlett, UC Berkeley

Alan Turing Institute via YouTube

Overview

This course aims to explore the concept of benign overfitting at the intersection of statistics and computer science in machine learning. The learning outcomes include understanding algorithmic paradigms like gradient descent methods, generalization guarantees, and implicit regularization strategies. The course teaches skills such as identifying statistical regularities in data, modeling noisy measurements, and developing high-dimensional statistical models. The teaching method involves a two-day conference featuring talks by leading international researchers. The intended audience includes faculty, postdoctoral researchers, and Ph.D. students from the UK/EU looking to delve into this research area and its applications.
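As background for the talk's central object (this is an illustrative sketch, not part of the course materials), the following NumPy snippet constructs the minimum-norm interpolating least-squares estimator in an overparameterized linear regression; the dimensions, noise level, and variable names are all assumptions made for the example.

```python
# Illustrative sketch only (not course code): the minimum-norm interpolating
# least-squares estimator discussed in "Benign Overfitting in Linear Regression".
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 500                                   # d >> n: overparameterized regime (assumed sizes)
theta_true = rng.normal(size=d) / np.sqrt(d)     # ground-truth parameter (assumed)
X = rng.normal(size=(n, d))
y = X @ theta_true + 0.1 * rng.normal(size=n)    # noisy labels

# Minimum-norm solution among all interpolators: theta_hat = pinv(X) @ y
theta_hat = np.linalg.pinv(X) @ y

train_mse = np.mean((X @ theta_hat - y) ** 2)    # essentially zero: the noise is fit exactly
X_test = rng.normal(size=(10_000, d))
test_mse = np.mean((X_test @ theta_hat - X_test @ theta_true) ** 2)
print(f"train MSE: {train_mse:.2e}, test MSE: {test_mse:.3f}")
```

Benign overfitting refers to regimes where such an estimator fits the training data, noise included, to zero error yet still predicts well on new data.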

Syllabus

Intro
Overfitting in Deep Networks
Statistical Wisdom and Overfitting
Progress on Overfitting Prediction Rules
Outline
Definitions
From regularization to overfitting
Interpolating Linear Regression
Benign Overfitting: A Characterization
Notions of Effective Rank
Benign Overfitting: Proof Ideas
What kinds of eigenvalues?
Extensions
Implications for deep learning
Implications for adversarial examples
Benign Overfitting: Future Directions
Benign Overfitting in Linear Regression
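For orientation, the syllabus item "Notions of Effective Rank" presumably refers to the two quantities appearing in the characterization of benign overfitting for linear regression; a sketch of the standard definitions, where Σ is the covariance of the covariates and λ₁ ≥ λ₂ ≥ … are its eigenvalues:

```latex
% Two notions of effective rank for the covariance \Sigma (background sketch,
% not reproduced from the talk), with eigenvalues \lambda_1 \ge \lambda_2 \ge \dots
\[
r_k(\Sigma) = \frac{\sum_{i > k} \lambda_i}{\lambda_{k+1}},
\qquad
R_k(\Sigma) = \frac{\bigl(\sum_{i > k} \lambda_i\bigr)^2}{\sum_{i > k} \lambda_i^2}.
\]
```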

Taught by

Alan Turing Institute
