The Future of Natural Language Processing

Hugging Face via YouTube

Overview

This course explores transfer learning in Natural Language Processing (NLP), focusing on open questions, current trends, limits, and future directions. Participants will walk through notable papers and research directions on model size, computational efficiency, out-of-domain generalization, model evaluation, fine-tuning, sample efficiency, common sense, and inductive biases. The teaching method is a slide presentation accompanied by discussion of key concepts. This course is intended for individuals interested in the latest advancements and future directions in NLP.

Syllabus

Intro
Open questions, current trends, limits
Model size and computational efficiency
Using more and more data
Pretraining on more data
Fine-tuning on more data
More data or better models
In-domain vs. out-of-domain generalization
The limits of NLU and the rise of NLG
Solutions to the lack of robustness
Reporting and evaluation issues
The inductive bias question
The common sense question

Taught by

Hugging Face
