Overview
Learn how to build local AI agents with Python in this 28-minute tutorial. Discover how to create a fully local, free-to-run AI system using Ollama, LangChain, and ChromaDB as the vector search database. Follow along with a complete project demonstration, Python setup and installation instructions, Ollama configuration, and the steps for running local large language models. The tutorial covers setting up a vector store database and connecting it to your LLM to create a functional Retrieval-Augmented Generation (RAG) system. All code is available on GitHub, with additional resources provided for the Ollama model library, download links, and related educational content on virtual environments and Ollama usage.
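The pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the tutorial's actual code (which is linked on GitHub): the model names (`llama3`, `mxbai-embed-large`), the collection name, and the sample text are assumptions, and running it requires a local Ollama server with those models pulled plus the `langchain-ollama` and `langchain-chroma` packages installed.

```python
# Hedged sketch of a local RAG pipeline with Ollama + LangChain + ChromaDB.
# Assumes `ollama serve` is running and the named models have been pulled.
from langchain_ollama import OllamaLLM, OllamaEmbeddings
from langchain_chroma import Chroma
from langchain_core.prompts import ChatPromptTemplate

# Embed documents into a local, persistent Chroma vector store.
embeddings = OllamaEmbeddings(model="mxbai-embed-large")  # assumed model
vector_store = Chroma(
    collection_name="docs",            # illustrative name
    embedding_function=embeddings,
    persist_directory="./chroma_db",
)
vector_store.add_texts(["Ollama runs LLMs entirely on your own machine."])

# Retrieve the most relevant chunk(s) for a question.
retriever = vector_store.as_retriever(search_kwargs={"k": 1})

# Feed the retrieved context plus the question to a local LLM.
llm = OllamaLLM(model="llama3")  # assumed model
prompt = ChatPromptTemplate.from_template(
    "Answer using this context:\n{context}\n\nQuestion: {question}"
)
chain = prompt | llm

question = "Where does Ollama run models?"
context = "\n".join(d.page_content for d in retriever.invoke(question))
print(chain.invoke({"context": context, "question": question}))
```

The key design point the video demonstrates is that every component here — the embedding model, the LLM, and the vector database — runs locally, so the system is free to operate and no data leaves your machine.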
Syllabus
00:00 | Video Overview
00:34 | Project Demo
02:02 | Python Setup/Installation
04:33 | Ollama Setup
07:14 | GitHub Copilot
08:22 | Local LLM Usage
14:52 | Vector Store Database Setup
23:24 | Connecting LLM & Vector Store
Taught by
Tech With Tim