
CLC number: TP181

On-line Access: 2025-06-04

Received: 2025-05-14

Revision Accepted: 2025-06-03

Crosschecked: 2025-09-04


 ORCID:

Jianhao GUO

https://orcid.org/0000-0002-4285-5328

Siliang TANG

https://orcid.org/0000-0002-7356-9711


Frontiers of Information Technology & Electronic Engineering  2025 Vol.26 No.8 P.1441-1453

http://doi.org/10.1631/FITEE.2500162


E-CGL: an efficient continual graph learner


Author(s):  Jianhao GUO, Zixuan NI, Yun ZHU, Siliang TANG

Affiliation(s):  Digital Media Computing & Design Lab, College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China

Corresponding email(s):   guojianhao@zju.edu.cn, zixuan2i@zju.edu.cn, zhuyun_dcd@zju.edu.cn, siliang@zju.edu.cn

Key Words:  Graph neural networks, Continual learning, Dynamic graphs, Continual graph learning, Graph acceleration


Jianhao GUO, Zixuan NI, Yun ZHU, Siliang TANG. E-CGL: an efficient continual graph learner[J]. Frontiers of Information Technology & Electronic Engineering, 2025, 26(8): 1441-1453.



Abstract: 
Continual learning (CL) has emerged as a crucial paradigm for learning from sequential data while retaining previous knowledge. Continual graph learning (CGL), characterized by dynamically evolving graphs derived from streaming data, presents distinct challenges that demand efficient algorithms to prevent catastrophic forgetting. The first challenge stems from the interdependencies between different graph data, in which previous graphs influence new data distributions. The second challenge lies in handling large graphs efficiently. To address these challenges, we propose an efficient continual graph learner (E-CGL) in this paper. We address the interdependence issue by demonstrating the effectiveness of replay strategies and introducing a combined sampling approach that considers both node importance and diversity. To improve efficiency, E-CGL leverages a simple yet effective multi-layer perceptron (MLP) model that shares weights with a graph neural network (GNN) during training, thereby accelerating computation by circumventing the expensive message-passing process. Our method achieves state-of-the-art results on four CGL datasets under two settings, while significantly lowering the average catastrophic forgetting value to −1.1%. Additionally, E-CGL speeds up training and inference by an average of 15.83× and 4.89×, respectively, across the four datasets. These results indicate that E-CGL not only effectively manages the correlations between different graph data during continual training but also enhances efficiency in large-scale CGL.
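The two mechanisms summarized in the abstract (an MLP that shares weights with a GNN so training can skip message passing, and a replay sampler that balances node importance with diversity) can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch example and not the authors' implementation: it assumes a GCN-style backbone whose per-layer linear weights are reused by a plain MLP forward pass, and a toy sampler that mixes two score vectors. All class, function, and parameter names are illustrative.

```python
# Minimal sketch (not the authors' code) of the two ideas in the abstract:
#  (1) an MLP forward pass reusing the GNN's per-layer weights, so training
#      avoids message passing while inference can still use the graph;
#  (2) a replay sampler mixing node-importance and diversity scores.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedWeightGraphLearner(nn.Module):
    """GCN-style model whose linear weights are shared by an MLP path (hypothetical)."""

    def __init__(self, in_dim: int, hid_dim: int, out_dim: int):
        super().__init__()
        self.lins = nn.ModuleList([nn.Linear(in_dim, hid_dim),
                                   nn.Linear(hid_dim, out_dim)])

    def forward_mlp(self, x: torch.Tensor) -> torch.Tensor:
        # Fast training path: no neighborhood aggregation.
        for i, lin in enumerate(self.lins):
            x = lin(x)
            if i < len(self.lins) - 1:
                x = F.relu(x)
        return x

    def forward_gnn(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # Graph-aware path: normalized adjacency applied with the same weights.
        for i, lin in enumerate(self.lins):
            x = adj_norm @ lin(x)
            if i < len(self.lins) - 1:
                x = F.relu(x)
        return x


def sample_replay_nodes(importance: torch.Tensor, diversity: torch.Tensor,
                        budget: int, alpha: float = 0.5) -> torch.Tensor:
    # Weighted mix of importance (e.g., degree/PageRank-style scores) and
    # diversity (e.g., distance to class prototypes); keep the top-k nodes.
    combined = alpha * importance + (1.0 - alpha) * diversity
    return torch.topk(combined, k=min(budget, combined.numel())).indices


if __name__ == "__main__":
    # Toy usage: train with the MLP path, evaluate with the GNN path.
    num_nodes, in_dim, num_classes = 100, 16, 4
    x = torch.randn(num_nodes, in_dim)
    y = torch.randint(0, num_classes, (num_nodes,))
    adj_norm = torch.eye(num_nodes)  # placeholder for a normalized adjacency

    model = SharedWeightGraphLearner(in_dim, 32, num_classes)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(10):
        opt.zero_grad()
        loss = F.cross_entropy(model.forward_mlp(x), y)  # message passing skipped
        loss.backward()
        opt.step()

    with torch.no_grad():
        preds = model.forward_gnn(x, adj_norm).argmax(dim=-1)  # same weights, with graph
        replay_idx = sample_replay_nodes(torch.rand(num_nodes), torch.rand(num_nodes), budget=20)
```

Because both forward paths read the same `nn.Linear` modules, gradients computed on the cheap MLP path directly update the weights later used by the graph-aware path; this is the general weight-sharing idea the abstract describes, sketched here under the stated assumptions.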



