This conference talk from FAST '25 presents LeapGNN, a feature-centric framework for accelerating distributed graph neural network (GNN) training. Learn how researchers from Zhejiang University and Washington State University Vancouver address the communication bottleneck of traditional model-centric GNN frameworks by reversing the paradigm: bringing GNN models to the vertex features instead of transferring massive volumes of vertex features to the models. Discover the three key innovations: a micrograph-based training strategy with model migration that minimizes remote feature retrieval, a feature pre-gathering approach that eliminates redundant feature transmissions, and a micrograph-based merging method that reduces kernel switches and synchronization overhead. The presentation showcases experimental results demonstrating speedups of up to 4.2× over state-of-the-art methods such as P3, a significant advance for training GNNs on large graphs efficiently.
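To see why moving the model to the features can pay off, here is a back-of-the-envelope sketch of the communication volumes involved. This is a toy model with illustrative numbers only; the function names, vertex counts, feature dimension, and model size are all assumptions for this example and are not taken from the LeapGNN paper.

```python
# Toy comparison of per-epoch network traffic in distributed GNN training.
# All quantities are hypothetical; real savings depend on graph partitioning,
# fanout, feature dimension, and model size.

def model_centric_bytes(num_remote_vertices: int, feature_dim: int,
                        bytes_per_value: int = 4) -> int:
    """Model-centric: each remote neighbor's feature vector crosses the network."""
    return num_remote_vertices * feature_dim * bytes_per_value

def feature_centric_bytes(num_migrations: int, model_state_bytes: int) -> int:
    """Feature-centric: only the (comparatively small) model state migrates
    to the partition that already holds the features."""
    return num_migrations * model_state_bytes

if __name__ == "__main__":
    fetch = model_centric_bytes(num_remote_vertices=100_000, feature_dim=512)
    migrate = feature_centric_bytes(num_migrations=16,
                                    model_state_bytes=1_048_576)  # ~1 MiB model
    print(f"model-centric:   {fetch:>12,} bytes")
    print(f"feature-centric: {migrate:>12,} bytes")
    print(f"ratio: {fetch / migrate:.1f}x")
```

Under these assumed numbers the feature-centric scheme moves roughly an order of magnitude less data, which is the intuition behind the reversal; the talk's reported 4.2× end-to-end speedup additionally reflects the pre-gathering and micrograph-merging optimizations.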
Syllabus
FAST '25 - LeapGNN: Accelerating Distributed GNN Training Leveraging Feature-Centric Model Migration
Taught by
USENIX