
YouTube

Neural Nets for NLP - Models with Latent Random Variables

Graham Neubig via YouTube

Overview

This course covers the following learning outcomes and goals:

- Understanding the differences between generative and discriminative models, and between deterministic and random variables.
- Exploring variational autoencoders (VAEs) and their applications in natural language processing (NLP).
- Learning how to handle discrete latent variables in neural networks.
- Gaining insight into examples of variational autoencoders in NLP.

It teaches the following skills and tools:

- Deep structured latent variable models
- Variational inference techniques
- The re-parameterization trick for handling sampling issues
- Difficulties in training variational autoencoders, and their solutions

The teaching method combines lectures, quizzes, practical exercises, and worked examples. The intended audience is anyone interested in neural networks for NLP, particularly those looking to dig into advanced topics such as variational autoencoders and handling latent variables in neural network models.

Syllabus

Intro
Discriminative vs. Generative Models
Quiz: What Types of Variables?
What is a Latent Random Variable Model?
Why Latent Variable Models?
Deep Structured Latent Variable Models: specify structure, but interpretable structure is often discrete, e.g. POS tags or dependency parse trees
Examples of Deep Latent Variable Models
A probabilistic perspective on Variational Auto-Encoder
What is Our Loss Function?
Practice
Variational Inference: approximates the true posterior p(z|x) with a family of distributions (see the ELBO sketch after the syllabus)
Variational Autoencoders
Learning VAE
Problem! Sampling Breaks Backprop
Solution: Re-parameterization Trick (a code sketch follows the syllabus)
Difficulties in Training: of the two components in the VAE objective, the KL divergence term is much easier to learn (a common mitigation is sketched after the syllabus)
Solution 3
Weaken the Decoder
Discrete Latent Variables?
Method 1: Enumeration (a code sketch follows the syllabus)
Solution 4
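
For reference, the loss function the lecture builds up to is the evidence lower bound (ELBO), the standard VAE training objective. A minimal statement in conventional notation (q_phi for the encoder's approximate posterior and p_theta for the decoder are the usual names, not taken from the lecture's slides):

```latex
% ELBO: maximize a lower bound on the data log-likelihood.
% q_\phi(z|x): approximate posterior (encoder); p_\theta(x|z): decoder; p(z): prior.
% First term: reconstruction; second term: KL regularization toward the prior.
\log p_\theta(x) \;\ge\;
  \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]
  \;-\;
  \mathrm{KL}\big(q_\phi(z \mid x) \,\big\|\, p(z)\big)
```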
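The "Sampling Breaks Backprop" and "Re-parameterization Trick" items refer to the standard fix for the fact that a sampling operation has no gradient: draw noise from a fixed distribution and transform it deterministically. A minimal PyTorch sketch, assuming a Gaussian latent whose encoder outputs `mu` and `log_var` (hypothetical variable names):

```python
import torch

def reparameterize(mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    """Draw z ~ N(mu, sigma^2) as z = mu + sigma * eps, with eps ~ N(0, I).

    All the randomness lives in eps, which involves no learnable
    parameters, so gradients flow through mu and log_var unimpeded.
    """
    std = torch.exp(0.5 * log_var)   # sigma = exp(log_var / 2)
    eps = torch.randn_like(std)      # fixed, parameter-free noise source
    return mu + std * eps            # differentiable w.r.t. mu and log_var
```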
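Because the KL term is much easier to optimize than the reconstruction term, training can collapse the approximate posterior onto the prior before the decoder learns to use the latent code. One widely used mitigation (not necessarily one of the lecture's numbered solutions) is annealing the KL weight from 0 to 1; a sketch, where `warmup_steps` is an assumed hyperparameter:

```python
def kl_weight(step: int, warmup_steps: int = 10_000) -> float:
    """Linearly increase the KL weight from 0 to 1 over warmup_steps.

    Early in training the objective is mostly reconstruction, which
    pushes the decoder to actually use the latent code z before the
    KL penalty starts pulling q(z|x) toward the prior.
    """
    return min(1.0, step / warmup_steps)

# Usage: loss = reconstruction_loss + kl_weight(step) * kl_term
```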
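For the discrete-latent-variable items, "Method 1: Enumeration" replaces sampling with an exact sum over all K values of the latent variable, which is feasible only when K is small. A PyTorch sketch with hypothetical names (`q_logits`, `loss_per_value`):

```python
import torch

def expected_loss_by_enumeration(q_logits: torch.Tensor,
                                 loss_per_value: torch.Tensor) -> torch.Tensor:
    """Exact expected loss over a small discrete latent variable z.

    q_logits:       unnormalized scores for q(z = k | x), shape (K,)
    loss_per_value: loss computed with z fixed to each value k, shape (K,)

    Summing over every value of z avoids sampling entirely, so the
    gradient is exact -- at a cost that grows linearly in K.
    """
    q = torch.softmax(q_logits, dim=-1)
    return (q * loss_per_value).sum()
```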

Taught by

Graham Neubig
