Probabilistic Graphical Models
Stanford University via Coursera Specialization
Overview
Syllabus
- Course 1: Probabilistic Graphical Models 1: Representation
- Course 2: Probabilistic Graphical Models 2: Inference
- Course 3: Probabilistic Graphical Models 3: Learning
Courses
Course 1: Probabilistic Graphical Models 1: Representation
5 weeks long, 66 hours worth of material
Probabilistic graphical models (PGMs) are a rich framework for encoding probability distributions over complex domains: joint (multivariate) distributions over large numbers of random variables that interact with each other. These representations sit at the intersection of statistics and computer science, relying on concepts from probability theory, graph algorithms, machine learning, and more. They are the basis for state-of-the-art methods in a wide variety of applications, such as medical diagnosis, image understanding, speech recognition, and natural language processing. They are also a foundational tool in formulating many machine learning problems.
This course is the first in a sequence of three. It describes the two basic PGM representations: Bayesian networks, which rely on a directed graph, and Markov networks, which use an undirected graph. The course discusses both the theoretical properties of these representations and their use in practice. The (highly recommended) honors track contains several hands-on assignments on representing real-world problems. The course also presents some important extensions beyond the basic PGM representation, which allow more complex models to be encoded compactly.
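To make the representation idea concrete, here is a minimal Python sketch (not taken from the course; the network, variable names, and probabilities are illustrative) of how a Bayesian network encodes a joint distribution through one conditional probability table (CPT) per node:

```python
# Illustrative sprinkler network: Rain -> Sprinkler, and both -> WetGrass.
# Each node stores only P(node | parents), never the full joint table.

P_rain = {True: 0.2, False: 0.8}       # P(R)
P_sprinkler = {                        # P(S | R), keyed by R
    True:  {True: 0.01, False: 0.99},
    False: {True: 0.40, False: 0.60},
}
P_wet = {                              # P(W | S, R), keyed by (S, R)
    (True, True):   {True: 0.99, False: 0.01},
    (True, False):  {True: 0.90, False: 0.10},
    (False, True):  {True: 0.80, False: 0.20},
    (False, False): {True: 0.00, False: 1.00},
}

def joint(rain, sprinkler, wet):
    """Chain rule restricted to each node's parents:
    P(R, S, W) = P(R) * P(S | R) * P(W | S, R)."""
    return (P_rain[rain]
            * P_sprinkler[rain][sprinkler]
            * P_wet[(sprinkler, rain)][wet])

print(joint(rain=True, sprinkler=False, wet=True))  # 0.2 * 0.99 * 0.8 = 0.1584
```

The payoff is compactness: a full joint over n binary variables needs 2**n numbers, while the CPTs grow only with each node's parent count.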
Course 2: Probabilistic Graphical Models 2: Inference
5 weeks long, 38 hours worth of material
This course is the second in a sequence of three. Following the first course, which focused on representation, this course addresses the question of probabilistic inference: how a PGM can be used to answer questions. Even though a PGM generally describes a very high dimensional distribution, its structure is designed so as to allow questions to be answered efficiently. The course presents both exact and approximate algorithms for different types of inference tasks, and discusses where each could best be applied. The (highly recommended) honors track contains two hands-on programming assignments, in which key routines of the most commonly used exact and approximate algorithms are implemented and applied to a real-world problem.
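As a sketch of what exact inference looks like, the following toy variable-elimination routine (illustrative only; the factor representation and elimination order are our own choices, not the course's code) answers P(WetGrass) for the sprinkler network above by summing out one variable at a time:

```python
from itertools import product

# A factor is (variables, table): `table` maps a tuple of truth values,
# aligned with `variables`, to a nonnegative number.

def multiply(f, g):
    """Pointwise product of two factors over the union of their scopes."""
    fv, ft = f
    gv, gt = g
    vars_ = fv + tuple(v for v in gv if v not in fv)
    table = {}
    for vals in product([True, False], repeat=len(vars_)):
        assign = dict(zip(vars_, vals))
        table[vals] = (ft[tuple(assign[v] for v in fv)]
                       * gt[tuple(assign[v] for v in gv)])
    return vars_, table

def sum_out(f, var):
    """Marginalize `var` out of factor f."""
    fv, ft = f
    keep = tuple(v for v in fv if v != var)
    table = {}
    for vals, p in ft.items():
        key = tuple(v for name, v in zip(fv, vals) if name != var)
        table[key] = table.get(key, 0.0) + p
    return keep, table

# CPTs of the sprinkler network as factors: P(R), P(S|R), P(W|S,R).
fR = (("R",), {(True,): 0.2, (False,): 0.8})
fS = (("R", "S"), {(True, True): 0.01, (True, False): 0.99,
                   (False, True): 0.40, (False, False): 0.60})
fW = (("S", "R", "W"),
      {(True, True, True): 0.99,  (True, True, False): 0.01,
       (True, False, True): 0.90, (True, False, False): 0.10,
       (False, True, True): 0.80, (False, True, False): 0.20,
       (False, False, True): 0.0, (False, False, False): 1.0})

# Eliminate S first (touching only factors whose scope contains S), then R.
tau = sum_out(multiply(fS, fW), "S")    # scope (R, W)
tau = sum_out(multiply(fR, tau), "R")   # scope (W,)
print({w: tau[1][(w,)] for w in (True, False)})  # P(W=True) = 0.44838
```

Eliminating variables one at a time, rather than building the full joint first, is what keeps the intermediate factors small when the graph is sparse.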
Course 3: Probabilistic Graphical Models 3: Learning
5 weeks long, 66 hours worth of material
This course is the third in a sequence of three. Following the first course, which focused on representation, and the second, which focused on inference, this course addresses the question of learning: how a PGM can be learned from a data set of examples. The course discusses the key problems of parameter estimation in both directed and undirected models, as well as the structure learning task for directed models. The (highly recommended) honors track contains two hands-on programming assignments, in which key routines of two commonly used learning algorithms are implemented and applied to a real-world problem.
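For a flavor of the parameter-estimation task, here is a minimal sketch (illustrative data and network, not the course's assignment code) of maximum-likelihood CPT estimation for a discrete Bayesian network, which reduces to normalized counting when the data are fully observed:

```python
from collections import Counter

# Toy fully observed samples of (Rain, Sprinkler, WetGrass) for the network
# Rain -> Sprinkler, {Rain, Sprinkler} -> WetGrass.
data = [
    (True, False, True), (False, True, True), (False, False, False),
    (False, True, True), (True, False, True), (False, False, False),
]

def mle_cpt(data, child_idx, parent_idx):
    """Estimate P(child | parents) by counting:
    theta = N(parents, child) / N(parents)."""
    joint_counts = Counter(
        (tuple(row[i] for i in parent_idx), row[child_idx]) for row in data)
    parent_counts = Counter(tuple(row[i] for i in parent_idx) for row in data)
    # In practice one adds pseudo-counts (Dirichlet priors) to avoid
    # zero estimates from small samples.
    return {(pa, c): n / parent_counts[pa] for (pa, c), n in joint_counts.items()}

# P(Sprinkler | Rain): with this toy data, P(S=True | R=False) = 2/4 = 0.5
# and P(S=False | R=True) = 2/2 = 1.0.
print(mle_cpt(data, child_idx=1, parent_idx=(0,)))
```

Undirected models and structure learning, also covered in the course, are harder: there the likelihood does not decompose into such independent per-node counts.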
Taught by
Daphne Koller
Related Courses
- Probabilistic Graphical Models 1: Representation (Stanford University), rated 4.1
- Probabilistic Graphical Models 3: Learning (Stanford University)
- Probabilistic Graphical Models 2: Inference (Stanford University), rated 4.3
- Advanced Statistics for Data Science (Johns Hopkins University)
- Computational Probability and Inference (Massachusetts Institute of Technology), rated 4.7
- Statistics with R (Duke University)