In this 3-day developer class on working with Large Language Models, we will show you how to use Transformer models for Natural Language Processing and how to leverage the tools and pre-trained models available from Hugging Face.
You'll learn how transformer models work and what their limitations are, and how to fine-tune a pre-trained model using the Trainer API or Keras. We'll also cover sharing models and tokenizers on the Hugging Face Hub, creating your own dataset, and performing semantic search with FAISS using the Datasets library.
Join us for an interactive 3-day journey into the world of Large Language Models with Hugging Face, and take your Natural Language Processing projects to the next level.
Prerequisites: To take this 3-day course, you should have taken our AI Workbench class or have basic knowledge of programming concepts and syntax in a language such as Python or JavaScript. General familiarity with APIs is also recommended.
COURSE OUTLINE
TRANSFORMER MODELS:
- Natural Language Processing
- Transformers, what can they do? (see the pipeline sketch after this list)
- How do Transformers work?
- Encoder models
- Decoder models
- Sequence-to-sequence models
- Bias and limitations
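To give a flavour of this module, here is a minimal sketch of the high-level pipeline API; the task, example sentence, and printed output are illustrative, and the pipeline downloads a default checkpoint for the task:

```python
from transformers import pipeline

# Build a ready-made pipeline; a default sentiment checkpoint is downloaded
classifier = pipeline("sentiment-analysis")

print(classifier("I've been waiting for a Hugging Face course my whole life!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```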
USING TRANSFORMERS:
- Behind the pipeline (see the sketch after this list)
- Models
- Tokenizers
- Handling multiple sequences
- Putting it all together
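As a taste of what "behind the pipeline" means, here is a minimal sketch that performs the same steps by hand; the checkpoint name is an assumption (any sequence-classification checkpoint would do), and PyTorch is used for the tensors:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed checkpoint; any sequence-classification checkpoint works here
checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

# Handling multiple sequences: pad them to a common length in one batch
inputs = tokenizer(
    ["I love this course.", "This is terrible."],
    padding=True,
    truncation=True,
    return_tensors="pt",
)

with torch.no_grad():
    logits = model(**inputs).logits

# Turn raw logits into probabilities and map indices back to labels
probabilities = torch.nn.functional.softmax(logits, dim=-1)
print(probabilities)
print(model.config.id2label)
```

The padding and truncation arguments in the tokenizer call are what make it possible to batch sequences of different lengths, which is the crux of the "Handling multiple sequences" topic.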
FINE-TUNING A PRE-TRAINED MODEL:
- Processing the data
- Fine-tuning a model with the Trainer API or Keras (see the sketch after this list)
- A full training
- Fine-tuning, Check!
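With the Trainer API, the fine-tuning workflow above condenses to only a few calls. In this minimal sketch, the GLUE MRPC dataset, the bert-base-uncased checkpoint, and the output directory name are placeholders chosen for illustration:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

checkpoint = "bert-base-uncased"             # placeholder checkpoint
raw_datasets = load_dataset("glue", "mrpc")  # placeholder dataset
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

def tokenize(batch):
    # MRPC provides two sentences per example
    return tokenizer(batch["sentence1"], batch["sentence2"], truncation=True)

tokenized = raw_datasets.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments("test-trainer"),  # output dir; other args at defaults
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,  # the default collator then pads each batch dynamically
)
trainer.train()
```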
SHARING MODELS AND TOKENIZERS:
- The Hugging Face Hub
- Using pre-trained models
- Sharing pre-trained models (see the sketch after this list)
- Building a model card
- Part 1 completed!
- End-of-chapter quiz
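Uploading to the Hugging Face Hub is essentially a one-liner per object. The sketch below assumes you are already authenticated (for example via huggingface-cli login); the repository name is a placeholder:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "bert-base-uncased"  # placeholder starting checkpoint
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Both objects expose push_to_hub; the repo is created if it doesn't exist
model.push_to_hub("my-awesome-model")      # placeholder repository name
tokenizer.push_to_hub("my-awesome-model")  # uploads the tokenizer files alongside
```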
THE DATASETS LIBRARY:
- What if my dataset isn't on the Hub?
- Time to slice and dice
- Big data? Datasets to the rescue!
- Creating your own dataset
- Semantic search with FAISS (see the sketch after this list)
- Datasets, check!
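Semantic search with FAISS via the Datasets library boils down to three steps: embed your texts, index the embedding column, and query the index. In this sketch the embedding model (from sentence-transformers) and the dataset are assumptions for illustration, not the course's exact example:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer  # assumed embedding model

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Placeholder dataset; we embed the "text" column of 1,000 examples
ds = load_dataset("ag_news", split="train[:1000]")
ds = ds.map(
    lambda batch: {"embedding": embedder.encode(batch["text"])},
    batched=True,
)

# Datasets wraps FAISS (requires the faiss package): index the embedding column...
ds.add_faiss_index(column="embedding")

# ...then retrieve the nearest examples for an embedded query
query = embedder.encode("stock markets rallied today")
scores, samples = ds.get_nearest_examples("embedding", query, k=3)
print(samples["text"])
```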