The First Asynchronous SGD with Optimal Time Complexity - Seminar #126
Federated Learning One World Seminar via YouTube
Overview
Watch this 59-minute presentation from the Federated Learning One World Seminar series, in which Arto Maranjyan (KAUST) presents the first asynchronous Stochastic Gradient Descent (SGD) algorithm with optimal time complexity. Delivered on April 16, 2025, the talk covers both the theoretical foundations of this result and its practical implications for distributed machine learning. For more information, visit the seminar website or connect with the speaker through his personal webpage. The presentation is part of the FLOW Seminar series (#126) on federated learning.
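To make the setting concrete, the sketch below simulates the generic asynchronous SGD pattern the talk builds on: each worker repeatedly reads the current (possibly stale) parameters, computes a stochastic gradient, and the server applies it the moment it arrives, without waiting for slower workers. This is only an illustration of asynchronous SGD in general, not the speaker's algorithm; the toy quadratic objective, step size, worker count, and simulated delays are all assumptions made for the example.

```python
# Minimal sketch of generic asynchronous SGD (illustrative only; not the
# algorithm from the talk). Workers compute stochastic gradients on
# possibly stale parameter snapshots; the server applies each gradient
# as soon as it arrives, so no worker ever waits for a straggler.
import threading
import time

import numpy as np

DIM = 10           # problem dimension (assumed for the toy example)
STEP_SIZE = 0.05   # constant step size (assumed)
STEPS = 2000       # total gradient applications across all workers
NUM_WORKERS = 4

x_star = np.random.default_rng(0).normal(size=DIM)  # minimizer of the toy objective
x = np.zeros(DIM)            # shared model parameters
lock = threading.Lock()      # guards reads/writes of x and the step counter
steps_done = 0

def stochastic_gradient(point: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Gradient of f(x) = 0.5 * ||x - x_star||^2 plus Gaussian noise."""
    return (point - x_star) + rng.normal(scale=0.1, size=DIM)

def worker(worker_id: int) -> None:
    global steps_done
    rng = np.random.default_rng(worker_id)  # per-worker RNG (thread safety)
    while True:
        with lock:
            if steps_done >= STEPS:
                return
            snapshot = x.copy()  # may be stale by the time its gradient is applied
        # Heterogeneous compute times: higher-numbered workers are slower.
        time.sleep(float(rng.uniform(0.0, 0.002 * (worker_id + 1))))
        g = stochastic_gradient(snapshot, rng)
        with lock:               # server applies the gradient on arrival
            x -= STEP_SIZE * g
            steps_done += 1

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("distance to optimum:", np.linalg.norm(x - x_star))
```

Because no worker ever waits for the slowest one, faster devices contribute more updates, but the gradients they apply may be computed on stale parameters; the talk concerns how to manage such methods so that they achieve the optimal time complexity.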
Syllabus
FLOW Seminar #126: Arto Maranjyan (KAUST) The First Asynchronous SGD with Optimal Time Complexity
Taught by
Federated Learning One World Seminar