Large Language Models in 2025 - How Much Understanding and Intelligence?
Stanford University via YouTube
Overview
In this 39-minute talk, Stanford University professor Christopher Manning examines the state of Large Language Models in 2025, exploring their capabilities, limitations, and implications for artificial intelligence. Delivered at a Stanford Open Virtual Assistant Lab workshop sponsored by the Alfred P. Sloan Foundation and Stanford HAI, the talk addresses key questions about LLMs: their practical applications, their effectiveness at understanding and generating human language, and whether they represent true intelligence or, as some claim, the beginning of artificial superintelligence. As the Thomas M. Siebel Professor of Machine Learning, Professor of Linguistics and Computer Science, and Senior Fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), Manning offers expert insight into the models that have dominated AI discourse since 2022. The workshop, which focused on public AI assistants' access to worldwide knowledge and the implications for the free web, was recorded on February 13, 2025, at Stanford University.
Syllabus
Christopher Manning: Large Language Models in 2025 – How Much Understanding and Intelligence?
Taught by
Stanford HAI