Adversarial Examples and Human-ML Alignment

MIT CBMM via YouTube

Aleksander Madry

Class Central Classrooms

YouTube playlists curated by Class Central.

Classroom Contents

  1. Adversarial Examples and Human-ML Alignment (Aleksander Madry)
  2. Deep Networks: Towards Human Vision?
  3. A Natural View on Adversarial Examples
  4. Why Are Adversarial Perturbations Bad?
  5. Human Perspective
  6. ML Perspective
  7. The Robust Features Model
  8. The Simple Experiment: A Second Look
  9. Human vs. ML Model Priors
  10. In fact, models...
  11. Consequence: Interpretability
  12. Consequence: Training Modifications
  13. Consequence: Robustness Tradeoffs
  14. Robustness + Perception Alignment
  15. Robustness + Better Representations
  16. Problem: Correlations Can Be Weird
  17. "Counterfactual" Analysis with Robust Models
  18. Adversarial Examples Arise from Non-Robust Features in the Data
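
The lecture's closing claim — that adversarial examples arise from non-robust features in the data — is typically demonstrated by perturbing an input along the loss gradient until the model's prediction flips while the input barely changes. As a minimal illustrative sketch (not code from the talk), here is the fast gradient sign method applied to a toy logistic-regression classifier; the weights, input, and epsilon are made up for the example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Shift x by eps in the sign of the loss gradient w.r.t. x,
    increasing the cross-entropy loss for the true label y."""
    p = sigmoid(w @ x + b)   # model's predicted probability of class 1
    grad_x = (p - y) * w     # d(cross-entropy)/dx for a linear model
    return x + eps * np.sign(grad_x)

# Toy linear classifier: predicts class 1 when w @ x + b > 0.
# All values below are illustrative assumptions.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.5, -0.4, 0.2])   # correctly classified as y = 1
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.6)
print(sigmoid(w @ x + b) > 0.5)      # → True  (original prediction)
print(sigmoid(w @ x_adv + b) > 0.5)  # → False (prediction flipped)
```

Each coordinate of `x_adv` moves by at most `eps`, so the perturbation is small in the L-infinity sense even though the prediction flips — the behavior the "non-robust features" framing is meant to explain.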
