
Practical AI Red Teaming - A Facial Recognition Case Study

Hack In The Box Security Conference via YouTube

Overview

This course focuses on practical AI red teaming through a facial recognition case study. Learning outcomes include understanding adversarial attacks on AI algorithms, building an attack taxonomy, evaluating approaches to attacking facial recognition systems, and conducting real-world research against facial recognition engines. The course covers skills such as designing attack strategies, identifying vulnerabilities in facial recognition software and hardware, and implementing defenses against AI attacks. The teaching method combines theoretical concepts, practical case studies, and real-world examples. The intended audience includes cybersecurity professionals, AI researchers, ethical hackers, and anyone interested in AI security and facial recognition technology.
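To give a flavor of the adversarial attacks the course discusses, here is a minimal sketch of an FGSM-style evasion attack in NumPy. The toy linear "face match" classifier, the sample, and the perturbation budget are all illustrative assumptions, not material from the talk; real facial recognition attacks target deep models and often physical conditions.

```python
import numpy as np

# Illustrative assumption: a toy linear classifier standing in for a
# face-matching model. Score > 0 means "match accepted".
rng = np.random.default_rng(0)
w = rng.normal(size=16)  # hypothetical model weights
b = 0.0

def score(x):
    return float(w @ x + b)

# A benign input the classifier accepts (aligned with w, so score > 0).
x = w / np.linalg.norm(w)

# FGSM-style step: move the input against the sign of the gradient of
# the score. For a linear model, the gradient w.r.t. x is simply w.
eps = 0.5  # perturbation budget (illustrative)
x_adv = x - eps * np.sign(w)

print("clean score:", score(x))        # positive: accepted
print("adversarial score:", score(x_adv))  # negative: rejected
```

The point of the sketch is that a small, structured perturbation, chosen using knowledge of the model's gradient, can flip the decision even though the input changes only slightly per dimension.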

Syllabus

Intro
Alex Polyakov
Adversa AI
Agenda
Why Secure AI
Confidentiality Integrity Availability
AI Applications
Who is affected
History of AI attacks
Top 10 AI attacks
Real applications
Real attacks
AI Red Teaming
Report
AI Red Teaming
Problem
Attack Goal
Attack Form
Attack Actor
Attack Conditions
Attack Methods
Success Criteria
Results
Home Task
Digital Attack
Physical Facial Recognition
Goals
Existing research
Why test in the real environment
Device features
Approaches
Tricks
Example
Result
Defenses
The biggest problem
High-level approaches
Secure AI lifecycle
Next steps
Conclusion

Taught by

Hack In The Box Security Conference

