
Stanford University

Bayesian Networks 3 - Maximum Likelihood - Stanford CS221: AI (Autumn 2019)

Stanford University via YouTube

Overview

This course shows how to estimate the parameters of a Bayesian network using maximum likelihood. It covers where parameters come from, the learning task, parameter sharing, Naive Bayes, HMMs, Laplace smoothing, Expectation Maximization (EM), and more, combining theoretical explanations with worked examples and scenarios. It is intended for learners who want to deepen their knowledge of Bayesian networks and AI.
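To make the central idea concrete: in a fully observed Bayesian network, maximum likelihood estimation reduces to counting and normalizing, and Laplace smoothing simply adds a pseudocount to every outcome before normalizing. The sketch below illustrates this for an invented two-variable network (genre → rating); the dataset, variable names, and smoothing constant are made up for illustration and are not taken from the course materials.

```python
from collections import Counter

# Toy dataset of (genre, rating) pairs for a two-node network G -> R.
# Maximum likelihood sets each local conditional probability to a
# normalized count: p(r | g) = count(g, r) / count(g).
data = [("drama", 4), ("drama", 4), ("drama", 5), ("comedy", 1), ("comedy", 5)]

LAMBDA = 1            # Laplace smoothing: pretend each (g, r) pair was seen LAMBDA extra times
ratings = [1, 2, 3, 4, 5]

pair_counts = Counter(data)               # count(g, r)
genre_counts = Counter(g for g, _ in data)  # count(g)

def p_rating_given_genre(r, g):
    # Smoothed maximum likelihood estimate:
    #   (count(g, r) + lambda) / (count(g) + lambda * |ratings|)
    return (pair_counts[(g, r)] + LAMBDA) / (genre_counts[g] + LAMBDA * len(ratings))

for g in sorted(genre_counts):
    dist = {r: round(p_rating_given_genre(r, g), 3) for r in ratings}
    print(g, dist)  # each row sums to 1; unseen ratings get small nonzero mass
```

With LAMBDA = 0 this recovers the raw maximum likelihood estimate, which assigns probability zero to any rating never observed for a genre; Laplace smoothing avoids those zeros, which is the regularization motivation the lecture discusses.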

Syllabus

Introduction.
Announcements.
Review: Bayesian network.
Review: probabilistic inference.
Where do parameters come from?
Roadmap.
Learning task.
Example: one variable.
Example: v-structure.
Example: inverted-v structure.
Parameter sharing.
Example: Naive Bayes.
Example: HMMs.
General case: learning algorithm.
Maximum likelihood.
Scenario 2.
Regularization: Laplace smoothing.
Example: two variables.
Motivation.
Maximum marginal likelihood.
Expectation Maximization (EM).

Taught by

Stanford Online
