Overview
Learn about hardware-software co-design methodologies and tools for AI systems in this technical presentation from Georgia Tech and NVIDIA experts. Explore how emerging AI models, such as the large language models (LLMs) behind generative AI, have intensified demands on compute FLOPS, memory capacity, and network bandwidth due to their massive parameter counts and low data reuse. Discover approaches for optimizing future AI system designs through strategic software-hardware co-development to meet these computational challenges.
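To see why low data reuse pushes the bottleneck toward memory bandwidth rather than compute, the following is a minimal back-of-the-envelope sketch in Python; the model size, data type, and GPU figures are illustrative assumptions, not numbers from the presentation.

```python
# Rough illustration of why low data reuse makes batch-1 LLM decode
# memory-bandwidth-bound. All model and hardware numbers are assumptions.

def decode_arithmetic_intensity(params: float, bytes_per_param: int = 2) -> float:
    """Approximate FLOPs per byte for generating one token at batch size 1.

    Each token touches roughly every weight once (~2 FLOPs per parameter
    for the multiply-accumulate), so each weight is reused only ~once.
    """
    flops = 2 * params                      # ~2 FLOPs per parameter
    bytes_moved = params * bytes_per_param  # each weight read once
    return flops / bytes_moved


def decode_throughput_bounds(params: float,
                             peak_tflops: float,
                             mem_bw_tb_s: float,
                             bytes_per_param: int = 2) -> dict:
    """Upper bounds on tokens/s from peak compute vs. memory bandwidth."""
    flops_per_token = 2 * params
    bytes_per_token = params * bytes_per_param
    return {
        "compute_bound_tok_s": peak_tflops * 1e12 / flops_per_token,
        "bandwidth_bound_tok_s": mem_bw_tb_s * 1e12 / bytes_per_token,
    }


if __name__ == "__main__":
    # Hypothetical 70B-parameter model in fp16 on a GPU assumed to have
    # ~1000 TFLOPS of fp16 compute and ~3 TB/s of HBM bandwidth.
    n_params = 70e9
    print("arithmetic intensity:",
          decode_arithmetic_intensity(n_params), "FLOPs/byte")
    print(decode_throughput_bounds(n_params, peak_tflops=1000, mem_bw_tb_s=3))
```

With these assumed numbers the arithmetic intensity is about 1 FLOP/byte, so the bandwidth-bound throughput (tens of tokens/s) is orders of magnitude below the compute-bound limit, which is the kind of imbalance that motivates co-designing hardware and software for such workloads.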
Syllabus
Modeling Methodology and Tools for HW/SW Codesign
Taught by
Open Compute Project