
Machine Learning and Artificial Intelligence Security Risk: Categorizing Attacks and Failure Modes

via LinkedIn Learning

Overview

Learn how and why machine learning and artificial intelligence systems fail, and explore ways to make these systems more secure and resilient.

Syllabus

Introduction
  • Machine learning security concerns
  • What you should know
1. Machine Learning Foundations
  • How systems can fail and how to protect them
  • Why does ML security matter?
  • Attacks vs. unintentional failure modes
  • Security goals for ML: CIA (confidentiality, integrity, availability)
2. Intentional Failure Modes/Attacks
  • Perturbation attacks and AUPs (see the sketch after this section)
  • Poisoning attacks
  • Reprogramming neural nets
  • Physical domain (3D adversarial objects)
  • Supply chain attacks
  • Model inversion
  • System manipulation
  • Membership inference and model stealing
  • Backdoors and existing exploits
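
For readers who want to see what the first attack class above looks like in practice, here is a minimal, FGSM-style perturbation attack sketch in PyTorch. It is an illustration, not code from the course; model, x, and y are hypothetical stand-ins for any differentiable classifier, an input batch, and its labels.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.03):
        """One-step perturbation attack (FGSM): nudge each input feature
        in the direction that increases the model's loss, bounded by epsilon."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()  # gradient of the loss with respect to the input
        # Step along the sign of the gradient; clamp to the valid input range.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

A small epsilon keeps the change imperceptible to a human while often flipping the model's prediction, which is what makes perturbation attacks so hard to spot.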
3. Unintentional Failure Modes/Intrinsic Design Flaws
  • Reward hacking
  • Side effects in reinforcement learning
  • Distributional shifts and incomplete testing (see the sketch after this section)
  • Overfitting/underfitting
  • Data bias considerations
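
As a concrete companion to the distributional-shift lesson above, this sketch (again illustrative, not course code) flags drift between a feature's training distribution and its live distribution with a two-sample Kolmogorov-Smirnov test; the arrays are synthetic stand-ins.

    import numpy as np
    from scipy.stats import ks_2samp

    def feature_has_drifted(train_values, live_values, alpha=0.01):
        """True if the live values are unlikely to come from the
        same distribution as the training values (two-sample KS test)."""
        _statistic, p_value = ks_2samp(train_values, live_values)
        return p_value < alpha

    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, 5_000)  # distribution the model was trained on
    live = rng.normal(0.5, 1.0, 5_000)   # production inputs with a shifted mean
    print(feature_has_drifted(train, live))  # True: the model is off-distribution

Catching a shift like this before accuracy degrades is one of the simplest guards against this unintentional failure mode.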
4. Building Resilient ML
  • Effective techniques for building resilience in ML
  • ML dataset hygiene
  • ML adversarial training (see the sketch after this section)
  • ML access control to APIs
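
The resilience techniques in this section also lend themselves to a short sketch. Below is a single adversarial-training step, once more in PyTorch and illustrative rather than course code: it crafts FGSM examples on the fly and optimizes on a mix of clean and adversarial inputs.

    import torch
    import torch.nn.functional as F

    def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
        """One optimizer step on a 50/50 mix of clean and adversarial examples."""
        # Craft adversarial inputs with a single FGSM step (see the earlier sketch).
        x_req = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_req), y).backward()
        x_adv = (x_req + epsilon * x_req.grad.sign()).clamp(0.0, 1.0).detach()

        optimizer.zero_grad()  # discard gradients left over from crafting x_adv
        loss = 0.5 * F.cross_entropy(model(x), y) \
             + 0.5 * F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()

Training a model against its own worst-case inputs hardens it against the perturbation attacks from section 2, usually at some cost in clean accuracy.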
Conclusion
  • Next steps

Taught by

Diana Kelley

Reviews

4.8 rating at LinkedIn Learning based on 19 ratings
