Overview
This course covers fine-tuning sentence transformers using the Natural Language Inference (NLI) training approach with softmax loss. The goal is to train models that generate sentence embeddings, which can be applied to semantic textual similarity, clustering, and information retrieval tasks. The course walks through the training process, preprocessing NLI data, implementing the loop in PyTorch, using the Sentence-Transformers library, and analyzing results. It is intended for anyone interested in NLP, machine learning, and deepening their understanding of sentence embeddings.
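The core idea behind softmax-loss NLI training is that a pair of sentence embeddings (u, v) is combined as (u, v, |u − v|) and classified into the three NLI labels (entailment, neutral, contradiction); the encoder learns useful embeddings as a by-product. A minimal PyTorch sketch of that classification head is below — the class name, toy dimensions, and random inputs are illustrative, not taken from the course:

```python
import torch
import torch.nn as nn

class SoftmaxLossHead(nn.Module):
    """SBERT-style softmax-loss head for NLI fine-tuning.

    Sentence embeddings u and v are concatenated as (u, v, |u - v|)
    and passed through a linear layer to predict one of the NLI
    labels: entailment, neutral, or contradiction.
    """

    def __init__(self, embedding_dim: int, num_labels: int = 3):
        super().__init__()
        self.classifier = nn.Linear(3 * embedding_dim, num_labels)
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, u: torch.Tensor, v: torch.Tensor, labels: torch.Tensor):
        # Combine the premise and hypothesis embeddings.
        features = torch.cat([u, v, torch.abs(u - v)], dim=-1)
        logits = self.classifier(features)
        # Cross-entropy over the NLI labels drives the encoder updates.
        return self.loss_fn(logits, labels), logits

# Toy batch: 4 premise/hypothesis embedding pairs of dimension 8.
torch.manual_seed(0)
u = torch.randn(4, 8)
v = torch.randn(4, 8)
labels = torch.tensor([0, 1, 2, 0])  # 0=entailment, 1=neutral, 2=contradiction

head = SoftmaxLossHead(embedding_dim=8)
loss, logits = head(u, v, labels)
```

In practice the embeddings u and v come from a pooled transformer encoder, and the Sentence-Transformers library packages this same pattern as its `SoftmaxLoss` training objective.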
Syllabus
Intro
NLI Fine-tuning
Softmax Loss Training Overview
Preprocessing NLI Data
PyTorch Process
Using Sentence-Transformers
Results
Outro
Taught by
James Briggs