
YouTube

Setting Up a Local LLM to Avoid Paying for ChatGPT

Python Lessons via YouTube

Overview

This 19-minute tutorial demonstrates how to set up and run GPT-style models privately on your own computer without relying on paid services. Follow a comprehensive walkthrough covering Ollama installation, model selection (including DeepSeek, Gemma, and QwQ), and running on GPU or CPU with Docker support. Learn to optimize your system for local LLMs, create a secure offline AI assistant, manage multiple models while keeping your data private, and troubleshoot common issues on Windows, Linux, and Mac. The step-by-step process covers installing Ollama, downloading appropriate models (starting with smaller ones to test compatibility), setting up Docker, and deploying OpenWebUI for a user-friendly interface. Perfect for engineers, developers, and AI enthusiasts looking to enhance productivity while maintaining data security through locally hosted large language models.
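The workflow described above boils down to a handful of commands. This is a minimal sketch, not the tutorial's exact steps: the model tag (`gemma3:1b`) and the OpenWebUI port mapping are illustrative, so check ollama.com and the OpenWebUI documentation for current names.

```shell
# Install Ollama (Linux; on macOS and Windows, use the installer from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Pull a small model first to confirm your hardware can handle it
ollama pull gemma3:1b

# Chat with it interactively in the terminal; /bye exits
ollama run gemma3:1b

# Run OpenWebUI in Docker, persisting its data in a named volume
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

# Then open http://localhost:3000 in a browser and select your model
```

Starting with a small model is the cheap compatibility test the overview mentions: if a 1B-parameter model runs acceptably, you can try progressively larger ones until you hit your RAM or VRAM limit.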

Syllabus

0:00 – Introduction and overview
0:44 – Ollama overview
2:31 – Installing and running Ollama
4:20 – Installing an LLM with Ollama
6:50 – Chatting with a local LLM
7:52 – OpenWebUI overview
8:45 – Installing and running OpenWebUI with Docker
10:55 – Playing around with Ollama and OpenWebUI
14:05 – Chatting with the Gemma 3 model
17:45 – Concluding thoughts
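Beyond the interactive chat covered in the syllabus, a running Ollama instance also exposes a local HTTP API on port 11434, which is what tools like OpenWebUI talk to. A minimal sketch of calling it from Python (the model name `gemma3:1b` is illustrative; use whatever you pulled):

```python
import json
import urllib.request

# Ollama's default local API endpoint for single-turn completions
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_llm(model: str, prompt: str) -> str:
    """Send one non-streaming prompt to a locally running Ollama model."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    try:
        print(ask_local_llm("gemma3:1b", "Why is the sky blue?"))
    except OSError:
        # Connection refused: the Ollama server is not running yet
        print("Ollama is not reachable; start it with `ollama serve` first.")
```

Because everything stays on `localhost`, prompts and responses never leave your machine, which is the privacy benefit the tutorial emphasizes.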

Taught by

Python Lessons

