Neural Nets for NLP 2017 - Unsupervised Learning of Structure

Graham Neubig via YouTube

Overview

This course covers the following learning outcomes and goals:

- Understanding the difference between learning features and learning structure in neural networks for NLP.
- Exploring various unsupervised learning methods in the context of natural language processing.
- Analyzing design decisions for unsupervised models and their implications.
- Examining examples of unsupervised learning in neural networks for NLP.

The course teaches the following individual skills and tools:

- Unsupervised feature learning and its applications.
- Hidden Markov Models with Gaussian emissions.
- Featurized Hidden Markov Models using neural networks.
- CRF Autoencoders and their role in learning discrete structures.
- Gated convolution, learning with reinforcement learning, and other advanced techniques in NLP.

The course is taught in a lecture format, with slides available for reference. The content is presented through examples, explanations, and demonstrations of various unsupervised learning techniques in neural networks for NLP. The intended audience includes students, researchers, and professionals interested in neural networks, natural language processing, and unsupervised learning methods.

Syllabus

Supervised, Unsupervised, Semi-supervised
Learning Features vs. Learning Discrete Structure
Unsupervised Feature Learning (Review)
How do we Use Learned Features?
What About Discrete Structure?
A Simple First Attempt
Unsupervised Hidden Markov Models • Change label states to unlabeled numbers
Hidden Markov Models w/ Gaussian Emissions • Instead of parameterizing each state with a categorical distribution, we can use a Gaussian (or Gaussian mixture)!
Featurized Hidden Markov Models (Tran et al. 2016) • Calculate the transition and emission probabilities with neural networks! • Emission: Calculate representation of each word in vocabulary w
CRF Autoencoders (Ammar et al. 2014)
Soft vs. Hard Tree Structure
One Other Paradigm: Weak Supervision
Gated Convolution (Cho et al. 2014)
Learning with RL (Yogatama et al. 2016)
Phrase Structure vs. Dependency Structure
Dependency Model w/ Valence (Klein and Manning 2004)
Unsupervised Dependency Induction w/ Neural Nets (Jiang et al. 2016)
Learning Dependency Heads w/ Attention (Kuncoro et al. 2017)
Learning Segmentations w/ Reconstruction Loss (Elsner and Shain 2017)
Learning Language-level Features (Malaviya et al. 2017) • All previous work learned features of a single sentence
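The "Hidden Markov Models w/ Gaussian Emissions" topic above replaces each state's categorical word distribution with a Gaussian over continuous vectors (e.g. word embeddings). A minimal sketch of the forward algorithm for such a model is below; all parameter values are illustrative toy numbers, not from the lecture.

```python
import numpy as np

def log_gaussian(x, mean, var):
    """Log density of a diagonal-covariance Gaussian at x."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def forward_log_likelihood(obs, pi, trans, means, vars_):
    """Forward algorithm in log space: log p(obs) under the HMM.

    obs:   (T, D) sequence of continuous observations (e.g. embeddings)
    pi:    (K,)   initial state probabilities
    trans: (K, K) transition probabilities, rows sum to 1
    means: (K, D) per-state Gaussian means
    vars_: (K, D) per-state diagonal variances
    """
    T, K = len(obs), len(pi)
    # Per-timestep, per-state log emission densities
    emit = np.array([[log_gaussian(obs[t], means[k], vars_[k])
                      for k in range(K)] for t in range(T)])
    alpha = np.log(pi) + emit[0]
    for t in range(1, T):
        # log-sum-exp over previous states, for each current state
        m = alpha.max()
        alpha = m + np.log(np.exp(alpha - m) @ trans) + emit[t]
    m = alpha.max()
    return m + np.log(np.sum(np.exp(alpha - m)))

# Toy example: 2 hidden states, 2-dimensional "embeddings"
rng = np.random.default_rng(0)
obs = rng.normal(size=(5, 2))
pi = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.2, 0.8]])
means = np.array([[0.0, 0.0], [1.0, 1.0]])
vars_ = np.ones((2, 2))
ll = forward_log_likelihood(obs, pi, trans, means, vars_)
print(ll)  # the sequence log-likelihood (a negative number here)
```

In the unsupervised setting, the Gaussian parameters (and transitions) would then be fit with EM or gradient ascent on this likelihood; the featurized variant (Tran et al. 2016) instead computes these probabilities with neural networks.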

Taught by

Graham Neubig

