

Getting Robust - Securing Neural Networks Against Adversarial Attacks

University of Melbourne via YouTube

Overview

This course teaches learners how to secure neural networks against adversarial attacks in machine learning. It covers adversarial attacks and defenses, certified robustness, and related techniques such as differential privacy. The lectures survey the domains affected by adversarial attacks, walk through example attacks, and present strategies for defending against them. The course is intended for researchers and developers who want to strengthen the security of their machine learning models.
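The listing itself contains no code, but as a rough illustration of the kind of attack the lectures discuss, below is a minimal sketch of the fast gradient sign method (FGSM), one standard way to craft adversarial examples. The toy PyTorch classifier, random input, and epsilon value are assumptions for illustration only and are not taken from the course materials.

```python
# Minimal FGSM-style adversarial perturbation sketch (PyTorch).
# Toy model and random input are placeholders, not course code.
import torch
import torch.nn as nn

# Toy classifier: 784-dimensional input (e.g. a flattened 28x28 image), 10 classes.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 784)   # stand-in for a real input image
y = torch.tensor([3])    # its (assumed) true label
epsilon = 0.1            # perturbation budget (L-infinity norm)

# Compute the gradient of the loss with respect to the input.
x_adv = x.clone().requires_grad_(True)
loss = loss_fn(model(x_adv), y)
loss.backward()

# FGSM step: move in the direction that increases the loss,
# then clamp back to the valid pixel range.
x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

Adversarial training, one of the defense strategies alluded to in the syllabus, typically reuses this kind of perturbation inside the training loop, while certified-robustness methods such as polytope bounding instead aim to prove that no perturbation within the epsilon ball can change the prediction.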

Syllabus

Introduction
Meet Andrew
Deep Learning Applications
Adversarial Learning
Deanonymization
Tay
Simon Weckert
What is an adversarial attack
Examples of adversarial attacks
Why adversarial attacks exist
Accuracy
Accuracy Robustness
Adversarial Attacks
Adversarial Defense
Certified Robustness
Differential Privacy
Differential Privacy Equation
Other Methods
Example
Polytope Bounding
Test Time Samples
Training Time Attacks
Conclusion

Taught by

The University of Melbourne

