

Adversarial Examples and Human-ML Alignment

MITCBMM via YouTube

Overview

The course focuses on understanding why adversarial examples arise and on aligning machine learning models with human perception. Learning outcomes include gaining insight into how deep networks differ from human vision, interpreting adversarial perturbations, and exploring the consequences for interpretability, training modifications, and robustness tradeoffs. The course teaches skills in analyzing correlations in data, conducting "counterfactual" analysis with robust models, and identifying the non-robust features that lead to adversarial examples. The teaching method combines theoretical discussion with practical experiments. The course is intended for individuals interested in machine learning, deep learning, and the alignment of human perception with ML models.
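As background for the lecture, the sketch below shows one standard way such adversarial perturbations are constructed: the fast gradient sign method (FGSM), which nudges an input along the sign of the loss gradient. This is a minimal illustration, not material from the course itself; the linear model, the epsilon value, and the function name fgsm_perturb are placeholder assumptions.

import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    # Return x shifted by epsilon in the direction that increases the loss.
    # (Illustrative sketch; epsilon and the model below are placeholders.)
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step along the sign of the input gradient; this small, often
    # imperceptible change can be enough to flip the model's prediction.
    return (x + epsilon * x.grad.sign()).detach()

# Toy usage with a hypothetical linear classifier on flattened 32x32 images.
model = nn.Linear(3 * 32 * 32, 10)
x = torch.rand(1, 3 * 32 * 32)   # stand-in input
y = torch.tensor([3])            # stand-in label
x_adv = fgsm_perturb(model, x, y)
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))

The construction makes the lecture's central point concrete: the perturbation is tiny in pixel terms yet exploits exactly the kind of non-robust correlations in the data that the talk argues models rely on.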

Syllabus

Adversarial Examples and Human-ML Alignment (Aleksander Madry)
Deep Networks: Towards Human Vision?
A Natural View on Adversarial Examples
Why Are Adv. Perturbations Bad?
Human Perspective
ML Perspective
The Robust Features Model
The Simple Experiment: A Second Look
Human vs ML Model Priors
In fact, models...
Consequence: Interpretability
Consequence: Training Modifications
Consequence: Robustness Tradeoffs
Robustness + Perception Alignment
Robustness + Better Representations
Problem: Correlations can be weird
"Counterfactual" Analysis with Robust Models
Adversarial examples arise from non-robust features in the data

Taught by

MITCBMM
