
Understanding Deep Neural Networks - From Generalization to Interpretability

Institute for Advanced Study via YouTube

Overview

This seminar on theoretical machine learning aims to build an understanding of deep neural networks, focusing on generalization and interpretability. It covers the impact of deep learning on mathematical problems, graph convolutional neural networks, spectral graph convolution, the relevance mapping problem, and the rate-distortion viewpoint. The material combines theoretical explanations with numerical results, including an MNIST experiment. The course is intended for those who want to explore the theoretical foundations of machine learning and neural networks.
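One of the central ideas listed above, spectral graph convolution via functional calculus, can be illustrated in a few lines: a filter function g is applied to the eigenvalues of the graph Laplacian, which defines the operator g(L) acting on graph signals. The sketch below is illustrative only and is not taken from the seminar; the path graph and the heat-kernel filter are assumptions chosen for the example.

```python
import numpy as np

def spectral_filter(adj, signal, g):
    """Apply a spectral filter g to a graph signal via functional
    calculus on the graph Laplacian (illustrative sketch)."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                      # combinatorial Laplacian L = D - A
    evals, evecs = np.linalg.eigh(lap)   # L = U diag(evals) U^T
    # Functional calculus: g(L) x = U diag(g(evals)) U^T x
    return evecs @ (g(evals) * (evecs.T @ signal))

# Toy example: a 4-node path graph, impulse signal, heat-kernel filter
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 0.0, 0.0])
smoothed = spectral_filter(adj, x, lambda lam: np.exp(-lam))
```

Because the filter is defined on the Laplacian's spectrum rather than on the graph's structure directly, the same g can be applied to two different graphs modeling the same phenomenon — the transferability question the syllabus takes up.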

Syllabus

Intro
The Dawn of Deep Learning
Impact of Deep Learning on Mathematical Problems
Numerical Results
Graph Convolutional Neural Networks
Two Approaches to Convolution on Graphs
Spectral Graph Convolution
Spectral Filtering using Functional Calculus
Graphs Modeling the Same Phenomenon
Comparing the Repercussion of a Filter on Two Graphs
Transferability of Functional Calculus Filters
Rethinking Transferability
Fundamental Questions concerning Deep Neural Networks
General Problem Setting
What is Relevance?
The Relevance Mapping Problem
Rate-Distortion Viewpoint
Problem Relaxation
Observations
MNIST Experiment
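The relevance mapping problem asks which input components a classifier's decision actually depends on; the rate-distortion viewpoint makes this precise by fixing a "rate" (how many components may be kept) and minimizing "distortion" (the expected change in the output when the remaining components are randomized). The sketch below is a greedy toy relaxation under assumed names — the model f, the baseline distribution, and the greedy selection are illustrative choices, not the seminar's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_distortion(f, x, mask, baseline, n_samples=64):
    """Monte Carlo estimate of E||f(x_obfuscated) - f(x)||^2 when the
    components outside `mask` are replaced by random baseline samples."""
    fx = f(x)
    total = 0.0
    for _ in range(n_samples):
        noise = baseline[rng.integers(len(baseline))]
        x_obf = np.where(mask, x, noise)   # keep masked entries, randomize the rest
        total += (f(x_obf) - fx) ** 2
    return total / n_samples

def greedy_relevance(f, x, baseline, k):
    """Greedy relaxation of the relevance mapping problem: at rate k,
    keep the k components whose retention reduces distortion most."""
    d = len(x)
    mask = np.zeros(d, dtype=bool)
    for _ in range(k):
        best, best_dist = None, np.inf
        for i in range(d):
            if mask[i]:
                continue
            trial = mask.copy()
            trial[i] = True
            dist = expected_distortion(f, x, trial, baseline)
            if dist < best_dist:
                best, best_dist = i, dist
        mask[best] = True
    return mask

# Toy model: only the first two coordinates influence the output
f = lambda z: 3.0 * z[0] - 2.0 * z[1]
x = np.array([1.0, -1.0, 0.5, 0.5])
baseline = rng.normal(size=(256, 4))
mask = greedy_relevance(f, x, baseline, k=2)
```

Sweeping the rate k and recording the achievable distortion traces out a rate-distortion curve for the explanation, which is the lens the syllabus applies before the MNIST experiment.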

Taught by

Institute for Advanced Study
