
The Practical Divide between Adversarial ML Research and Security Practice - A Red Team Perspective

USENIX Enigma Conference via YouTube

Overview

This course examines the practical divide between adversarial machine learning (ML) research and security practice from the perspective of a Red Team. Learning outcomes include understanding the gap between academic advances in adversarial ML and industry needs, recognizing why security considerations matter for ML models, and learning about tools and techniques for securing ML systems. The course covers skills such as conducting Red Team engagements, implementing security measures like access control, and monitoring the health of ML systems. The teaching method is a review of real-world examples and lessons learned from a Machine Learning Red Team engagement at Microsoft. It is intended for security professionals, ML researchers, and anyone interested in the intersection of ML and cybersecurity.
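To make the "health monitoring" idea concrete: one common technique (a generic sketch, not necessarily the approach used in the Microsoft engagement described in the talk) is to watch for drift between a model's baseline score distribution and its live traffic, since drift can signal data problems or adversarial probing. The function name and thresholds below are illustrative assumptions.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index (PSI) between a baseline score
    distribution and a live one; larger values mean more drift.
    Rule of thumb: < 0.1 stable, > 0.25 significant shift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the fractions to avoid log(0) and division by zero.
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)  # scores seen during validation
live_ok = rng.normal(0.0, 1.0, 5000)   # live traffic, same distribution
shifted = rng.normal(0.8, 1.0, 5000)   # drifted traffic worth investigating

print(population_stability_index(baseline, live_ok))  # small: stable
print(population_stability_index(baseline, shifted))  # large: alert
```

In a deployment, the baseline histogram would be frozen at validation time and the PSI computed over rolling windows of production scores, alerting when it crosses a chosen threshold.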

Syllabus

Introduction
A fundamental paradigm mismatch
The state of ML security
Red teaming
Example
Red Team Attack
Lessons Learned
Health Monitoring
Data
Conclusion

Taught by

USENIX Enigma Conference
