NYU Deep Learning

via YouTube

Overview

This course covers the history and foundations of deep learning: gradient descent, the backpropagation algorithm, and neural networks implemented in PyTorch; parameter sharing in recurrent and convolutional nets; the properties of natural signals; latent variable energy-based models (LV-EBMs) and structured prediction; unsupervised and self-supervised learning, autoencoders, and variational inference; generative adversarial networks; attention mechanisms and transformers; graph convolutional networks; low-resource machine translation; optimization for deep learning; and planning and control. Along the way, it teaches practical tools and skills, including joint embedding methods, PCA, autoencoders, K-means, Gaussian mixture models, sparse coding, VAEs, differentiable associative memories, GANs, self-supervised learning in computer vision, speech recognition, and Graph Transformer Networks. Teaching combines lectures with practical PyTorch implementations and a final project. The course is intended for anyone interested in deep learning, neural networks, and their applications in fields such as computer vision, natural language processing, and machine translation.
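
To give a flavor of the practical side of the course, here is a minimal PyTorch sketch of training a small neural net with gradient descent and backpropagation. The architecture, data, and hyperparameters are illustrative assumptions, not taken from the course materials.

```python
import torch
import torch.nn as nn

# Illustrative only: a tiny two-layer classifier on random data,
# not an example from the course itself.
torch.manual_seed(0)
X = torch.randn(128, 16)           # 128 samples, 16 features
y = torch.randint(0, 4, (128,))    # 4 class labels

model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),                     # "rotation and squashing", in the lectures' terms
    nn.Linear(32, 4),
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(100):
    optimizer.zero_grad()          # clear gradients from the previous step
    loss = criterion(model(X), y)  # forward pass
    loss.backward()                # backpropagation computes gradients
    optimizer.step()               # gradient descent update
```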

Syllabus

01 – History and resources.
01L – Gradient descent and the backpropagation algorithm.
02 – Neural nets: rotation and squashing.
02L – Modules and architectures.
03 – Tools, classification with neural nets, PyTorch implementation.
03L – Parameter sharing: recurrent and convolutional nets.
04L – ConvNet in practice.
04.1 – Natural signals properties and the convolution.
04.2 – Recurrent neural networks, vanilla and gated (LSTM).
05L – Joint embedding method and latent variable energy based models (LV-EBMs).
05.1 – Latent Variable Energy Based Models (LV-EBMs), inference.
05.2 – But what are these EBMs used for?
06L – Latent variable EBMs for structured prediction.
06 – Latent Variable Energy Based Models (LV-EBMs), training.
07L – PCA, AE, K-means, Gaussian mixture model, sparse coding, and intuitive VAE.
07 – Unsupervised learning: autoencoding the targets.
08L – Self-supervised learning and variational inference.
08 – From LV-EBM to target prop to (vanilla, denoising, contractive, variational) autoencoder.
09L – Differentiable associative memories, attention, and transformers.
09 – AE, DAE, and VAE with PyTorch; generative adversarial networks (GAN) and code.
10L – Self-supervised learning in computer vision.
10 – Self / cross, hard / soft attention and the Transformer.
11L – Speech recognition and Graph Transformer Networks.
11 – Graph Convolutional Networks (GCNs).
12L – Low resource machine translation.
12 – Planning and control.
13L – Optimisation for Deep Learning.
13 – The Truck Backer-Upper.
14L – Lagrangian backpropagation, final project winners, and Q&A session.
14 – Prediction and Planning Under Uncertainty.
AI2S Xmas Seminar - Dr. Alfredo Canziani (NYU) - Energy-Based Self-Supervised Learning.

Taught by

Alfredo Canziani
