Large Language Models and the Neuroscience of Meaning
Wu Tsai Neurosciences Institute, Stanford via YouTube
Overview
This 43-minute talk by Laura Gwilliams, a faculty scholar at the Wu Tsai Neurosciences Institute and Stanford Data Science Institute, explores the intersection between large language models and human brain function. Rather than asking how closely AI resembles the brain, the talk flips the usual comparison: examine how AI chatbots can help us understand our own neural processing of language and how the brain extracts meaning from speech. Drawing on work from her Laboratory of Speech Neuroscience, Gwilliams explains the computational architecture of speech comprehension in the human brain and the hierarchical dynamic coding that coordinates our understanding of speech. The presentation also references related research on neural pathways and brain development, offering a comprehensive look at how neuroscience and artificial intelligence can inform each other. Perfect for those interested in cognitive science, AI development, linguistics, and the future of human-machine understanding.
Syllabus
What ChatGPT understands: Large language models and the neuroscience of meaning | Laura Gwilliams
Taught by
Wu Tsai Neurosciences Institute, Stanford