Exact acceleration of subgraph graph neural networks by eliminating computation redundancy
Qian TAO, Xiyuan WANG, Muhan ZHANG, Shuxian HU, Wenyuan YU, Jingren ZHOU
Front. Comput. Sci., 2026, Vol. 20, Issue 12: 2012626
Graph neural networks (GNNs) have become a prevalent framework for graph tasks. Many recent studies have proposed applying graph convolutions over the numerous subgraphs of each graph, a family of models known as subgraph GNNs. Despite their impressive performance, subgraph GNNs suffer from storage and computational inefficiencies due to the large number and size of the subgraphs. To address this problem, this paper introduces Ego-Nets-Fit-All (ENFA), a model that uniformly takes small ego nets as subgraphs, thereby providing greater storage and computational efficiency while guaranteeing outputs identical to those of the original subgraph GNNs. Experiments show that ENFA reduces storage space by 29.0% to 84.5% and improves training efficiency by up to 1.66×.
graph learning / graph neural network / subgraph GNN / model training acceleration
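The core idea summarized above is to restrict each node's subgraph to its k-hop ego network rather than a full copy of the graph. A minimal sketch of ego-net extraction via breadth-first search is shown below; the function name `ego_net`, the adjacency-dict representation, and the toy graph are illustrative assumptions, not the authors' implementation.

```python
from collections import deque

def ego_net(adj, root, k):
    # Return the node set within k hops of `root` in the graph `adj`
    # (adjacency dict: node -> list of neighbours), i.e. the node set
    # of the k-hop ego network rooted at `root`.
    seen = {root}
    frontier = deque([(root, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue  # do not expand beyond k hops
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, depth + 1))
    return seen

# A node-based subgraph GNN would run message passing on one such
# ego net per node, instead of on a full graph copy per node.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
subgraphs = {v: ego_net(graph, v, 1) for v in graph}
```

Because each ego net is typically much smaller than the whole graph, storing and convolving over these subgraphs is where the paper's storage and training savings come from.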
Higher Education Press