Fine-tuning LLMs with Huggingface and PyTorch - A Step-by-Step Tutorial

Neural Breakdown with AVB via YouTube

Overview

Learn to fine-tune large language models (LLMs) locally in this comprehensive step-by-step tutorial, which demonstrates fine-tuning Meta's Llama-3.2-1B-Instruct model with Huggingface Transformers and PyTorch. Master essential concepts including prompting techniques, dataset creation, input-output pair generation, loss functions, PyTorch optimizers, and PEFT LoRA adapters while building a paper-category prediction task. Explore fundamental components such as tokenizers, instruction prompts, chat templates, and next-word prediction, culminating in both complete fine-tuning with PyTorch and LoRA fine-tuning with PEFT. The tutorial runs on a MacBook Pro M2 with 16GB RAM and offers practical insights for local machine learning, including notes on hardware limitations and on quantization considerations for NVIDIA GPUs. Accompanying code, notebooks, datasets, slides, animations, and detailed write-ups are available through the creator's Patreon.
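The next-word-prediction objective mentioned above can be sketched in plain PyTorch. This is an illustrative loss over token sequences, not the tutorial's own code; the tensor shapes, the `pad_id` value, and the helper name `next_word_loss` are assumptions for the sketch.

```python
import torch
import torch.nn.functional as F

def next_word_loss(logits, input_ids, pad_id=0):
    """Cross-entropy over a sequence: each position predicts the next token.

    logits:    (batch, seq_len, vocab) raw model outputs
    input_ids: (batch, seq_len) token ids; pad_id positions are ignored
    """
    # Shift so that the logits at position t predict the token at position t+1.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = input_ids[:, 1:].contiguous()
    # ignore_index masks out padding so it contributes no loss.
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=pad_id,
    )

# Toy example: a batch of 2 sequences over a vocabulary of 10 tokens.
logits = torch.randn(2, 5, 10)
input_ids = torch.tensor([[3, 1, 4, 1, 5], [2, 7, 0, 0, 0]])
loss = next_word_loss(logits, input_ids)
print(loss.item())  # a positive scalar
```

The same shift-and-mask pattern is what `labels=input_ids` triggers internally when training a causal LM with Huggingface Transformers.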

Syllabus

- Intro
- Huggingface Transformers Basics
- Tokenizers
- Instruction Prompts and Chat Templates
- Dataset creation
- Next word prediction
- Loss functions on sequences
- Complete finetuning with PyTorch
- LoRA finetuning with PEFT
- Results
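The LoRA idea covered in the syllabus can be illustrated with a minimal plain-PyTorch layer: freeze the pretrained weight and learn only a low-rank update. This is a sketch of the concept, not PEFT's actual implementation (the real library wraps a model's existing layers via a `LoraConfig`); the class name and the rank/alpha values here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen Linear layer plus a trainable low-rank update: W + (alpha/r) * B @ A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pretrained weights
            p.requires_grad_(False)
        # A is small random, B is zero, so training starts from the base model exactly.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # Base output plus the scaled low-rank correction; only A and B are trained.
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(nn.Linear(32, 32), r=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 4*32 + 32*4 = 256 trainable params vs 32*32 + 32 frozen
```

Because `lora_B` starts at zero, the wrapped layer initially computes exactly the same output as the frozen base layer, which is why LoRA fine-tuning can begin from the pretrained model without disturbing it.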

Taught by

Neural Breakdown with AVB
