
YouTube

Intro to Artificial Intelligence - Temporal Difference Learning - Lecture 19

Dave Churchill via YouTube

Overview

This lecture from Memorial University's Computer Science 3200/6980 (Winter 2025) covers key reinforcement learning concepts, focusing on Temporal Difference Learning. Learn about on-policy vs off-policy methods, epsilon-soft implementations, SARSA, Q-Learning, and the differences between tabular and deep reinforcement learning through practical examples like the Cliff problem. The second half includes exam preparation guidance and a comprehensive demonstration of Assignment 5, complete with code walkthrough. Taught by Professor David Churchill, this session is part of the Introduction to Artificial Intelligence course that applies algorithmic techniques to game-based problem solving.
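As a taste of the lecture's core topic, here is a minimal sketch of tabular Q-Learning with an epsilon-soft (epsilon-greedy) behaviour policy, the off-policy TD method covered in the lecture. The environment is a hypothetical 5-state chain invented for illustration (not the Cliff problem from the lecture): the agent starts in state 0 and earns +1 for reaching terminal state 4.

```python
import random

N_STATES = 5            # states 0..4; state 4 is terminal (+1 reward on entry)
ACTIONS = [-1, +1]      # move left, move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(s, a):
    """Deterministic chain dynamics: move, clamp at 0, reward on reaching state 4."""
    s2 = max(0, min(N_STATES - 1, s + a))
    reward = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, reward, s2 == N_STATES - 1

def epsilon_greedy(Q, s, rng):
    """Epsilon-soft behaviour policy: every action keeps nonzero probability."""
    if rng.random() < EPSILON:
        return rng.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda i: Q[s][i])

def q_learning(episodes=500, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = epsilon_greedy(Q, s, rng)
            s2, r, done = step(s, ACTIONS[a])
            # Off-policy TD target: bootstrap from the GREEDY action in s2,
            # regardless of what the behaviour policy actually does next.
            # (SARSA would instead use the action it really takes in s2.)
            target = r + (0.0 if done else GAMMA * max(Q[s2]))
            Q[s][a] += ALPHA * (target - Q[s][a])
            s = s2
    return Q

Q = q_learning()
policy = [max(range(len(ACTIONS)), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)]
print(policy)  # greedy policy should choose "right" (index 1) in every state
```

Changing one line of the update (using the action actually taken in `s2` instead of the greedy one) turns this into on-policy SARSA, which is the distinction the Cliff example in the lecture is built around.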

Syllabus

00:00 - Intro
01:32 - On Policy vs Off Policy
09:12 - Epsilon-Soft Code / Example
14:08 - Off Policy Methods
16:08 - Temporal Difference Learning
22:46 - Driving Home Example
29:06 - SARSA
33:21 - Q-Learning
36:11 - The Cliff Example
41:48 - Tabular vs 'Deep' RL
42:57 - Exam Questions
43:48 - Assignment 5 Demo
01:02:59 - Assignment 5 Code

Taught by

Dave Churchill
