Overview
Learn how and why machine learning and artificial intelligence technology fails and understand ways to make these systems more secure and resilient.
Syllabus
Introduction
- Machine learning security concerns
- What you should know
- How systems can fail and how to protect them
- Why does ML security matter?
- Attacks vs. unintentional failure modes
- Security goals for ML: CIA
- Perturbation attacks and AUPs
- Poisoning attacks
- Reprogramming neural nets
- Physical domain (3D adversarial objects)
- Supply chain attacks
- Model inversion
- System manipulation
- Membership inference and model stealing
- Backdoors and existing exploits
- Reward hacking
- Side effects in reinforcement learning
- Distributional shifts and incomplete testing
- Overfitting/underfitting
- Data bias considerations
- Effective techniques for building resilience in ML
- ML dataset hygiene
- ML adversarial training
- ML access control to APIs
- Next steps
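As a small taste of the perturbation-attack and adversarial-training topics listed above, here is a minimal, self-contained sketch of a fast-gradient-sign style perturbation against a toy logistic-regression model. The model, weights, and function names are illustrative assumptions for this sketch only; they are not material from the course.

```python
import numpy as np

# Toy logistic-regression "model" with fixed, illustrative weights.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_perturb(x, y, eps=0.25):
    """FGSM-style attack: nudge every feature of x a small step (eps)
    in the direction that increases the loss for the true label y."""
    p = predict(x)
    grad_x = (p - y) * w  # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 1.0, 1.0])   # clean input, true label 1
x_adv = fgsm_perturb(x, y=1.0)  # slightly perturbed copy
print(predict(x), predict(x_adv))
```

With these illustrative weights, the clean input is scored as class 1 while the perturbed copy, changed by at most 0.25 per feature, drops below the decision threshold. Adversarial training, also listed above, counters this by folding such perturbed examples back into the training set.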
Taught by
Diana Kelley