
Large Language Models and the Neuroscience of Meaning

Wu Tsai Neurosciences Institute, Stanford via YouTube

Overview

This 43-minute talk by Laura Gwilliams, a faculty scholar at the Wu Tsai Neurosciences Institute and the Stanford Data Science Institute, explores the intersection of large language models and human brain function. Examine how AI chatbots can help us understand our own neural processing of language and extraction of meaning, flipping the traditional perspective of comparing AI to the human brain. Discover insights from Gwilliams' Laboratory of Speech Neuroscience as she explains the computational architecture of speech comprehension in the human brain and the hierarchical dynamic coding that coordinates our understanding of speech. The presentation also references related research on neural pathways and brain development, offering a look at how neuroscience and artificial intelligence can inform each other. It will appeal to those interested in cognitive science, AI development, linguistics, and the future of human-machine understanding.

Syllabus

What ChatGPT understands: Large language models and the neuroscience of meaning | Laura Gwilliams

Taught by

Wu Tsai Neurosciences Institute, Stanford

