
Explore a 40-minute lecture from Harvard University that investigates why large language models (LLMs) improve performance when they generate additional "reasoning tokens," or a longer chain of thought (CoT), at inference time. Discover which aspects of task complexity most strongly determine the optimal amount of reasoning. The presentation covers the paper "Critical Thinking: Which Kinds of Complexity Govern Optimal Reasoning Length?" by Celine Lee and Alexander M. Rush of Cornell University and Keyon Vafa of Harvard University, using the metaphor of a Möbius strip to explain nonlinear AI reasoning processes. Learn about test-time compute scaling and how different kinds of complexity affect AI reasoning capabilities in this academic exploration of advanced AI concepts.