

CMU Neural Nets for NLP: Model Interpretation

Graham Neubig via YouTube

Overview

The course covers model interpretation in the context of neural networks for natural language processing. Learning outcomes include understanding why interpretability matters and exploring explanation techniques such as gradient-based importance scores and extractive rationale generation. The course also teaches how to probe sentence embeddings for linguistic properties and how to evaluate model interpretations. It is delivered in a lecture format and is intended for anyone interested in neural networks, natural language processing, and model interpretation.
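As a taste of one technique the lecture covers, the sketch below computes gradient-based importance scores for a toy classifier: each input token is scored by the norm of the gradient of the predicted class score with respect to that token's embedding. The model, sizes, and variable names are illustrative stand-ins, not code from the course.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy setup: all sizes and modules here are illustrative assumptions.
vocab_size, embed_dim, num_classes = 100, 16, 2
embedding = nn.Embedding(vocab_size, embed_dim)
classifier = nn.Linear(embed_dim, num_classes)

token_ids = torch.tensor([[5, 42, 7, 99]])   # one toy "sentence"
embeds = embedding(token_ids)                # shape (1, seq_len, embed_dim)
embeds.retain_grad()                         # keep grad on a non-leaf tensor

logits = classifier(embeds.mean(dim=1))      # mean-pool tokens, then classify
pred_class = logits.argmax(dim=-1).item()
logits[0, pred_class].backward()             # d(predicted score)/d(embeddings)

# Importance of each token = L2 norm of its embedding's gradient.
importance = embeds.grad.norm(dim=-1).squeeze(0)
print(importance)                            # one saliency score per token
```

Variants of this idea (gradient times input, integrated gradients) differ mainly in how the gradient signal is aggregated into a per-token score.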

Syllabus

Intro
Why interpretability?
What is interpretability?
Two broad themes
Source Syntax in NMT
Why Neural Translations Are the Right Length
Fine-grained analysis of sentence embeddings
What you can cram into a single vector: Probing sentence embeddings for linguistic properties
Issues with probing
Minimum Description Length (MDL) Probes
How to evaluate?
Explanation Techniques: Gradient-based Importance Scores
Explanation Techniques: Extractive Rationale Generation
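The probing items on the syllabus ("Fine-grained analysis of sentence embeddings", "What you can cram into a single vector", "Issues with probing") can be illustrated with a minimal sketch: freeze the sentence embeddings and train a small linear probe to predict a surface property of the sentence. Everything below, including the random stand-in embeddings and the length-bin labels, is an assumption for illustration rather than the lecture's own code.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Frozen "sentence embeddings" and property labels; both are random
# stand-ins here, so any above-chance probe accuracy is pure probe
# memorization, one of the "issues with probing" the lecture raises.
num_sentences, embed_dim, num_bins = 512, 64, 4
sentence_embeddings = torch.randn(num_sentences, embed_dim)
length_bins = torch.randint(0, num_bins, (num_sentences,))

probe = nn.Linear(embed_dim, num_bins)       # deliberately simple probe
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):                         # train the probe only;
    optimizer.zero_grad()                    # embeddings stay fixed
    loss = loss_fn(probe(sentence_embeddings), length_bins)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    preds = probe(sentence_embeddings).argmax(dim=-1)
    print(f"probe accuracy: {(preds == length_bins).float().mean():.2f}")
```

In a real study the embeddings would come from a trained encoder and the labels from annotated data; MDL probes, also on the syllabus, replace raw accuracy with a codelength-based measure precisely to control for what the probe itself memorizes.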

Taught by

Graham Neubig
