
Stanford University

Large Language Models in 2025 - How Much Understanding and Intelligence?

Stanford University via YouTube

Overview

In this 39-minute talk, Stanford University professor Christopher Manning examines the state of Large Language Models in 2025, exploring their capabilities, limitations, and implications for artificial intelligence. Delivered at a Stanford Open Virtual Assistant Lab workshop sponsored by the Alfred P. Sloan Foundation and Stanford HAI, the talk addresses key questions about LLMs: what they are useful for in practice, how effective they are at understanding and generating human language, and whether they represent true intelligence or, as some claim, the beginning of artificial superintelligence. As the Thomas M. Siebel Professor of Machine Learning, Professor of Linguistics and Computer Science, and Senior Fellow at the Stanford Institute for Human-Centered AI (HAI), Manning offers an expert perspective on the models that have dominated AI discourse since 2022. The workshop, focused on public AI assistants to worldwide knowledge and the implications for the free web, was recorded on February 13, 2025, at Stanford University.

Syllabus

Christopher Manning: Large Language Models in 2025 – How Much Understanding and Intelligence?

Taught by

Stanford HAI

