This video explains a simple, free solution to the problem of distilled reasoning LLMs overthinking and underperforming. Learn about a new research preprint that analyzes why distilled versions of reasoning models struggle and explains in detail how in-context learning enhances reasoning capabilities while reducing overthinking. The 29-minute presentation covers findings from researchers at the Chinese Academy of Sciences and other institutions, demonstrating how to improve reasoning performance at no additional cost. Discover practical implementation strategies based on the paper "Innate Reasoning is Not Enough: In-Context Learning Enhances Reasoning Large Language Models with Less Overthinking."
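The core technique discussed, using in-context learning to curb overthinking, amounts to prepending a few worked examples with deliberately short reasoning traces before the actual question. A minimal sketch of that prompt-construction step is below; the exemplars, formatting, and function name are illustrative assumptions, not the paper's exact prompts.

```python
# Sketch of the in-context learning (ICL) idea: show the model a few
# question/short-reasoning/answer exemplars so a distilled reasoning model
# imitates concise chains of thought instead of overthinking.
# Exemplars and formatting here are hypothetical illustrations.

def build_icl_prompt(question, exemplars):
    """Assemble a few-shot prompt: each exemplar pairs a question with a
    deliberately brief reasoning trace and its answer."""
    parts = []
    for ex_question, ex_reasoning, ex_answer in exemplars:
        parts.append(f"Q: {ex_question}\nReasoning: {ex_reasoning}\nA: {ex_answer}")
    # End with the new question and an open "Reasoning:" cue for the model.
    parts.append(f"Q: {question}\nReasoning:")
    return "\n\n".join(parts)

# Hypothetical exemplars with intentionally terse reasoning.
EXEMPLARS = [
    ("What is 12 * 11?", "12 * 11 = 12 * 10 + 12 = 132.", "132"),
    ("A train travels 60 km in 1.5 hours. What is its speed?",
     "Speed = distance / time = 60 / 1.5 = 40 km/h.", "40 km/h"),
]

prompt = build_icl_prompt("What is 25% of 80?", EXEMPLARS)
print(prompt)
```

The prompt string would then be sent to the reasoning model as-is; because the method only changes the input text, it adds no training or fine-tuning cost.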
Overview
Syllabus
Reasoning LLM ($$$$) Overthink | Easy Solution
Taught by
Discover AI