Overview
Watch a comprehensive lecture from UC Berkeley EECS in which researcher Eric Wallace examines memorization in language models. Explore how large language models store and retrieve information, drawing on insights from Wallace's PhD research. Learn about the mechanisms behind model memorization, its implications for AI development, and the current understanding of how these systems retain and access stored information. Gain technical insight into the inner workings of language models through this academic presentation, part of Berkeley's course on understanding LLMs. Note that all research discussed reflects Wallace's doctoral work prior to his role at OpenAI.
Syllabus
Eric Wallace: Memorization in language models
Taught by
UC Berkeley EECS