
Duke University

Beginning Llamafile for Local Large Language Models (LLMs)

Duke University via Coursera

Overview

Learners will gain the skills to serve powerful language models as practical, scalable web APIs. The course shows how to use the llama.cpp example server to expose a large language model through REST API endpoints for tasks such as text generation, tokenization, and embedding extraction, and covers the technical details of running the server, configuring its options to customize model behavior, and handling requests efficiently.

Learners will also interact with the API using tools like curl and Python, allowing them to integrate language model capabilities into their own applications. Throughout the course, hands-on exercises and code examples reinforce the concepts and provide practical experience in setting up and using the llama.cpp server. By the end, participants will be equipped to deploy robust language model APIs for a variety of natural language processing tasks.

The course stands out by focusing on the practical side of serving large language models in production environments with the efficient and flexible llama.cpp framework, giving learners a convenient, performant API interface to state-of-the-art NLP models.
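As a sketch of the kind of interaction the course covers, the snippet below queries a locally running llama.cpp example server from Python using only the standard library. The `/completion` endpoint, the `prompt`/`n_predict` request fields, and the `content` response field follow the llama.cpp server's documented defaults; the port and response shape are assumptions to verify against your build.

```python
import json
from urllib import request


def build_payload(prompt: str, n_predict: int = 64) -> str:
    """Build the JSON body for a llama.cpp /completion request."""
    return json.dumps({"prompt": prompt, "n_predict": n_predict})


def complete(prompt: str, server: str = "http://localhost:8080") -> str:
    """POST a prompt to the server and return the generated text.

    Assumes the llama.cpp example server is running on its default
    port and returns the generated text in a "content" field.
    """
    req = request.Request(
        f"{server}/completion",
        data=build_payload(prompt).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]
```

The equivalent curl call would be `curl -d '{"prompt": "...", "n_predict": 64}' http://localhost:8080/completion`.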

Syllabus

  • Getting Started with Mozilla Llamafile
    • This week, you will run large language models locally, keeping your data private and avoiding network latency and API fees, using the Mixtral model with llamafile.
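A minimal sketch of the workflow this week introduces: download a llamafile, make it executable, and launch it. The filename below is illustrative (use whichever llamafile you actually downloaded), and flags such as `--nobrowser` may vary by llamafile version, so check `--help` for your build.

```shell
# Filename is illustrative; substitute the llamafile you downloaded.
chmod +x mixtral-8x7b-instruct.llamafile

# Launch the bundled server; by default it listens on localhost:8080.
# --nobrowser (if supported by your version) skips opening the web UI.
./mixtral-8x7b-instruct.llamafile --nobrowser
```

Because a llamafile bundles the model weights and the llama.cpp runtime into a single executable, no separate installation step is needed.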

Taught by

Noah Gift and Alfredo Deza

