This lecture explores goal conditioning and hierarchical planning in robot learning, with a particular focus on training methodologies for foundation models. Discover how goal-conditioned policies overcome the limitations of traditional reinforcement learning on complex, long-horizon tasks. Learn about the evolution from one-hot encoded goal vectors to more scalable continuous goal representations that live in the same space as the state, enabling a single policy to reach diverse goals. The presentation connects these ideas to foundation models while examining important questions about which goal distributions are best for training and which generalization strategies improve reusability and robustness. It is particularly valuable for understanding how goal-conditioned approaches help robots handle task variations and long sequences such as cooking, where many smaller, repetitive actions must be coordinated toward a larger objective.
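To make the core idea concrete, below is a minimal sketch (not from the lecture itself) of a goal-conditioned policy in PyTorch. The goal is a continuous vector in the same space as the state and is simply concatenated with it, so one network can be queried with arbitrary goals rather than a fixed set of one-hot task indices. The class name, network sizes, and dimensions are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class GoalConditionedPolicy(nn.Module):
    """Illustrative policy pi(a | s, g) where the goal g lives in state space.

    Unlike a one-hot task index, which fixes the number of tasks up front,
    a continuous goal lets a single network be asked to reach any state.
    """

    def __init__(self, state_dim: int, action_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim * 2, hidden_dim),  # state and goal share a dimension
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, action_dim),
        )

    def forward(self, state: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        # Goal conditioning here is just concatenation of state and goal.
        return self.net(torch.cat([state, goal], dim=-1))

# Example: the same policy queried with two different goals.
state_dim, action_dim = 8, 2
policy = GoalConditionedPolicy(state_dim, action_dim)
state = torch.randn(1, state_dim)
goal_a = torch.randn(1, state_dim)  # e.g. a desired future state
goal_b = torch.randn(1, state_dim)
print(policy(state, goal_a))
print(policy(state, goal_b))
```

How goals are sampled during training (the "goal distribution" question the lecture raises) is left open here; hindsight relabeling of reached states is one common choice.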
Syllabus
Robot Learning: Goal-Conditioned Planning
Taught by
Montreal Robotics