Exact acceleration of subgraph graph neural networks by eliminating computation redundancy

Qian TAO , Xiyuan WANG , Muhan ZHANG , Shuxian HU , Wenyuan YU , Jingren ZHOU

Front. Comput. Sci. ›› 2026, Vol. 20 ›› Issue (12): 2012626 DOI: 10.1007/s11704-025-50471-4
Information Security
RESEARCH ARTICLE


Abstract

Graph neural networks (GNNs) have become a prevalent framework for graph tasks. Many recent studies apply graph convolution over the numerous subgraphs of each graph, an approach known as subgraph graph neural networks. Despite their impressive performance, subgraph GNNs suffer from both storage and computational inefficiency due to the vast number and large size of the subgraphs. In response, this paper introduces Ego-Nets-Fit-All (ENFA), a model that uniformly takes small ego nets as subgraphs, thereby providing greater storage and computational efficiency while guaranteeing outputs identical to those of the original subgraph GNNs. Experiments reveal that ENFA can reduce storage space by 29.0% to 84.5% and improve training efficiency by up to 1.66×.
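To make the setting concrete, the following toy sketch (not the paper's ENFA implementation, and with a hand-written degree-free readout standing in for a learned message-passing model) shows the data flow of an ego-net-based subgraph method: each node's representation is computed only from its k-hop ego net, the subgraph choice that ENFA standardizes on.

```python
# Toy sketch: per-node features computed on k-hop ego nets.
# The (num_nodes, num_edges) readout is a hypothetical stand-in for a
# learned GNN applied to each ego net; the graph is an adjacency dict.
from collections import deque

def ego_net(adj, root, k):
    """Return the set of nodes within k hops of root (BFS)."""
    seen = {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        if seen[u] == k:
            continue  # do not expand past the k-hop frontier
        for w in adj[u]:
            if w not in seen:
                seen[w] = seen[u] + 1
                q.append(w)
    return set(seen)

def ego_net_readout(adj, k=1):
    """Map each node to a feature of its induced k-hop ego net:
    (#nodes, #edges), a placeholder for a real subgraph-GNN encoder."""
    feats = {}
    for v in adj:
        nodes = ego_net(adj, v, k)
        # count each undirected edge inside the ego net once (u < w)
        edges = sum(1 for u in nodes for w in adj[u] if w in nodes and u < w)
        feats[v] = (len(nodes), edges)
    return feats

# 5-cycle: every 1-hop ego net is a 3-node path with 2 edges
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(ego_net_readout(cycle, k=1))  # {0: (3, 2), 1: (3, 2), ...}
```

The per-node loop makes the cost structure the paper targets visible: a naive subgraph GNN materializes and processes one subgraph per node, which is exactly the redundancy ENFA is designed to eliminate.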

Graphical abstract

Keywords

graph learning / graph neural network / subgraph GNN / model training acceleration

Cite this article

Qian TAO, Xiyuan WANG, Muhan ZHANG, Shuxian HU, Wenyuan YU, Jingren ZHOU. Exact acceleration of subgraph graph neural networks by eliminating computation redundancy. Front. Comput. Sci., 2026, 20(12): 2012626 DOI:10.1007/s11704-025-50471-4



RIGHTS & PERMISSIONS

Higher Education Press
