AdaGL: Adaptive Learning for Agile Distributed Training of Gigantic GNNs
| Published in | 2023 60th ACM/IEEE Design Automation Conference (DAC), pp. 1-6 |
|---|---|
| Main Authors | , , , , |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 09.07.2023 |
| DOI | 10.1109/DAC56929.2023.10248003 |
| Summary: | Distributed GNN training on contemporary massive and densely connected graphs requires information aggregation from all neighboring nodes, which leads to an explosion of inter-server communication. This paper proposes AdaGL, a highly scalable end-to-end framework for rapid distributed GNN training. AdaGL's novelty lies in its adaptive-learning-based graph-allocation engine and its use of multi-resolution coarse representations of dense graphs. As a result, AdaGL achieves an unprecedented level of balanced server computation while minimizing communication overhead. Extensive proof-of-concept evaluations on billion-scale graphs show that AdaGL attains ∼30-40% faster convergence compared with prior art. |
|---|---|