Generative AI with Large Language Models
DeepLearning.AI and Amazon Web Services via Coursera
Overview
In Generative AI with Large Language Models (LLMs), you’ll learn the fundamentals of how generative AI works, and how to deploy it in real-world applications.
By taking this course, you'll learn to:
- Deeply understand generative AI, describing the key steps in a typical LLM-based generative AI lifecycle, from data gathering and model selection, to performance evaluation and deployment
- Describe in detail the transformer architecture that powers LLMs, how they’re trained, and how fine-tuning enables LLMs to be adapted to a variety of specific use cases
- Use empirical scaling laws to optimize the model's objective function across dataset size, compute budget, and inference requirements (see the illustrative sketch after this list)
- Apply state-of-the-art training, tuning, inference, tools, and deployment methods to maximize the performance of models within the specific constraints of your project
- Discuss the challenges and opportunities that generative AI creates for businesses after hearing stories from industry researchers and practitioners
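The scaling-laws objective above is the most formula-like item in this list, so a minimal sketch may help. The course does not publish code here; the snippet below only illustrates the widely cited Chinchilla-style heuristic (training compute C ≈ 6·N·D FLOPs for N parameters and D tokens, with roughly 20 training tokens per parameter), and the function name, signature, and example budget are assumptions made for this sketch rather than course content.

```python
# Illustrative sketch only (not course material): one widely cited empirical
# scaling result (the "Chinchilla" heuristic) approximates training compute as
# C ~= 6 * N * D FLOPs for N parameters and D training tokens, and finds that
# a compute-optimal model trains on roughly 20 tokens per parameter.
# The function name and the tokens_per_param default are assumptions.

def compute_optimal_split(compute_budget_flops: float,
                          tokens_per_param: float = 20.0) -> tuple[float, float]:
    """Estimate a compute-optimal (parameters, tokens) pair for a FLOP budget."""
    # From C = 6 * N * D and D = tokens_per_param * N:
    #   C = 6 * tokens_per_param * N**2  =>  N = sqrt(C / (6 * tokens_per_param))
    n_opt = (compute_budget_flops / (6 * tokens_per_param)) ** 0.5
    d_opt = tokens_per_param * n_opt
    return n_opt, d_opt


if __name__ == "__main__":
    # Example: a hypothetical 1e23 FLOP training budget
    n, d = compute_optimal_split(1e23)
    print(f"~{n:.1e} parameters trained on ~{d:.1e} tokens")
```

Run as written, the example suggests roughly 3e10 parameters trained on about 6e11 tokens for a 1e23 FLOP budget; the exact exponents and coefficients the course derives may differ.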
Developers who have a good foundational understanding of how LLMs work, as well as the best practices behind training and deploying them, will be able to make good decisions for their companies and more quickly build working prototypes. This course will support learners in building practical intuition about how to best utilize this exciting new technology.
This is an intermediate course, so you should have some experience coding in Python to get the most out of it. You should also be familiar with the basics of machine learning, such as supervised and unsupervised learning, loss functions, and splitting data into training, validation, and test sets. If you have taken the Machine Learning Specialization or Deep Learning Specialization from DeepLearning.AI, you’ll be ready to take this course and dive deeper into the fundamentals of generative AI.
Syllabus
- Week 1
- Generative AI use cases, project lifecycle, and model pre-training
- Week 2
- Fine-tuning and evaluating large language models
- Week 3
- Reinforcement learning and LLM-powered applications
Taught by
Antje Barth, Chris Fregly, Shelbee Eigenbrode and Mike Chambers