E-CGL: an efficient continual graph learner
Jianhao GUO, Zixuan NI, Yun ZHU, Siliang TANG
Front. Inform. Technol. Electron. Eng., 2025, 26(8): 1441-1453.
Continual learning (CL) has emerged as a crucial paradigm for learning from sequential data while retaining previous knowledge. Continual graph learning (CGL), characterized by graphs that evolve dynamically from streaming data, presents distinct challenges that demand efficient algorithms to prevent catastrophic forgetting. The first challenge stems from the interdependencies between graph data, whereby previous graphs influence the distributions of new data. The second challenge lies in handling large graphs efficiently. To address these challenges, we propose an efficient continual graph learner (E-CGL). We tackle the interdependence issue by demonstrating the effectiveness of replay strategies and introducing a combined sampling approach that considers both node importance and node diversity. To improve efficiency, E-CGL leverages a simple yet effective multi-layer perceptron (MLP) that shares weights with a graph neural network (GNN) during training, thereby accelerating computation by circumventing the expensive message-passing process. Our method achieves state-of-the-art results on four CGL datasets under two settings, while significantly lowering the catastrophic forgetting value to an average of −1.1%. In addition, E-CGL delivers average training and inference speedups of 15.83× and 4.89×, respectively, across the four datasets. These results indicate that E-CGL not only effectively manages the correlations between graph data during continual training but also improves efficiency in large-scale CGL.
Keywords: Graph neural networks / Continual learning / Dynamic graphs / Continual graph learning / Graph acceleration
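The combined replay sampling described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch-style implementation, not the authors' code: it scores candidate nodes by mixing a cheap importance proxy (degree centrality) with a diversity term (distance to already selected nodes in embedding space). The function name, the degree-based importance measure, and the mixing weight `alpha` are assumptions for illustration only.

```python
import torch

def sample_replay_nodes(adj, embeddings, budget, alpha=0.5):
    """Select replay nodes by mixing importance (degree centrality)
    with diversity (distance to already selected nodes)."""
    n = embeddings.size(0)
    assert budget <= n

    # Importance: normalized node degree as a cheap centrality proxy.
    degree = adj.sum(dim=1).float()
    importance = degree / (degree.max() + 1e-12)

    # Greedily grow the replay set, starting from the most important node.
    selected = [int(importance.argmax())]
    mask = torch.ones(n, dtype=torch.bool)
    mask[selected[0]] = False

    while len(selected) < budget:
        # Diversity: distance from each candidate to its nearest selected node.
        sel = embeddings[selected]                         # (k, d)
        dists = torch.cdist(embeddings, sel).min(dim=1).values
        diversity = dists / (dists.max() + 1e-12)

        # Combined score; already selected nodes are excluded.
        score = alpha * importance + (1 - alpha) * diversity
        score[~mask] = -float("inf")
        nxt = int(score.argmax())
        selected.append(nxt)
        mask[nxt] = False

    return torch.tensor(selected)
```

A greedy farthest-point style selection like this keeps the replay buffer from collapsing onto a few high-degree hubs, which is the intuition behind combining the two criteria.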
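The weight-sharing idea between the MLP and the GNN can likewise be sketched. In the assumed design below, a single linear layer serves two forward paths: an MLP path that skips message passing during training, and a GNN path that applies the same shared weights after normalized-adjacency aggregation. The layer structure and ReLU activation are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SharedLayer(nn.Module):
    """One layer whose weight matrix is shared between an MLP view
    (no message passing) and a GNN view (with aggregation)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, x, adj_norm=None):
        if adj_norm is None:
            # MLP mode: feature transform only; avoids the expensive
            # message-passing step during training.
            return torch.relu(self.lin(x))
        # GNN mode: normalized-adjacency aggregation followed by the
        # *same* linear map, so no separate weights are needed.
        return torch.relu(self.lin(adj_norm @ x))

# Usage: train cheaply on features alone, then evaluate with the graph.
# layer = SharedLayer(64, 32)
# h_fast = layer(x)             # MLP path
# h_full = layer(x, adj_norm)   # GNN path, identical weights
```

Because both paths read the same `nn.Linear` parameters, whatever is learned in the fast MLP mode transfers directly to the graph-aware mode, which is consistent with the training and inference speedups reported in the abstract.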
Zhejiang University Press