Input Feature Pruning for Accelerating GNN Inference on Heterogeneous Platforms


Bibliographic Details
Published in: Proceedings - International Conference on High Performance Computing, pp. 282 - 291
Main Authors: Yik, Jason, Kuppannagari, Sanmukh R., Zeng, Hanqing, Prasanna, Viktor K.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.12.2022
ISSN: 2640-0316
DOI: 10.1109/HiPC56025.2022.00045


Summary: Graph Neural Networks (GNNs) are an emerging class of machine learning models which utilize structured graph information and node features to reduce high-dimensional input data to low-dimensional embeddings, from which predictions can be made. Due to the compounding effect of aggregating neighbor information, GNN inferences require raw data from many times more nodes than are targeted for prediction. Thus, on heterogeneous compute platforms, inference latency can be largely subject to the inter-device communication cost of transferring input feature data to the GPU/accelerator before computation has even begun. In this paper, we analyze the trade-off effect of pruning input features from GNN models, reducing the volume of raw data that the model works with to lower communication latency at the expense of an expected decrease in the overall model accuracy. We develop greedy and regression-based algorithms to determine which features to retain for optimal prediction accuracy. We evaluate pruned model variants and find that they can reduce inference latency by up to 80% with an accuracy loss of less than 5% compared to non-pruned models. Furthermore, we show that the latency reductions from input feature pruning can be extended under different system variables such as batch size and floating point precision.
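The abstract's greedy feature-retention idea can be sketched as backward elimination: repeatedly drop the input feature whose removal costs the least validation accuracy until a target feature budget is met. This is a minimal illustration, not the paper's implementation; `score_fn` is a hypothetical hook assumed to evaluate the GNN on a candidate feature subset.

```python
import numpy as np

def greedy_prune(num_features, score_fn, keep_k):
    """Greedy backward elimination over input features.

    num_features: total number of input feature dimensions.
    score_fn(mask): hypothetical evaluation hook; returns validation
        accuracy of the model restricted to features where mask is True.
    keep_k: number of features to retain.
    Returns a boolean mask of retained features.
    """
    mask = np.ones(num_features, dtype=bool)
    while mask.sum() > keep_k:
        best_drop, best_score = None, -np.inf
        # Try dropping each remaining feature; keep the drop that
        # preserves the highest score.
        for f in np.flatnonzero(mask):
            mask[f] = False
            s = score_fn(mask)
            if s > best_score:
                best_drop, best_score = f, s
            mask[f] = True
        mask[best_drop] = False
    return mask
```

With a toy additive score (sum of per-feature weights), the search keeps the highest-weight features, which matches the intuition that pruning should discard the least informative inputs first.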