Exploring Rationale Learning for Continual Graph Learning

AAAI 2025
Abstract

Catastrophic forgetting poses a significant challenge for graph neural networks that must continuously update their knowledge base from data streams. To address this issue, most research has focused on node-level continual learning using parameter regularization or rehearsal-based strategies, while little attention has been given to graph-level tasks. Furthermore, current paradigms for continual graph learning may inadvertently capture spurious correlations for specific tasks through shortcuts, thereby exacerbating the forgetting of previous knowledge when new tasks are introduced. To tackle these challenges, we propose a novel paradigm, Rationale Learning GNN (RL-GNN), for graph-level continual graph learning. Specifically, we harness the invariant learning principle to incorporate environmental interventions into both the current and historical distributions, aiming to uncover rationales by minimizing empirical risk across all environments. The rationale then serves as the sole factor guiding the learning process, so continual graph learning is redefined as capturing these invariant rationales across task sequences, alleviating the catastrophic forgetting caused by spurious features. Extensive experiments on real-world datasets with varying task lengths demonstrate the effectiveness of RL-GNN in continuously assimilating knowledge and reducing catastrophic forgetting.
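
The abstract describes uncovering rationales by minimizing empirical risk across current and historical environments, so that only invariant features drive learning. The PyTorch-style sketch below is a rough illustration of that idea only, not the authors' RL-GNN implementation: it pairs a per-node rationale mask with an environment-wise risk plus a cross-environment variance penalty. The names (`RationaleSketch`, `environment_risk`), the mean-pool encoder, and the specific penalty are hypothetical stand-ins assumed for illustration.

```python
# Hypothetical sketch (not the paper's released code): an environment-wise
# risk objective in the spirit of invariant/rationale learning for
# graph-level tasks. Each environment supplies graphs (node-feature tensors)
# and labels; the "rationale" is a learned soft mask over nodes before pooling.
import torch
import torch.nn as nn


class RationaleSketch(nn.Module):
    def __init__(self, in_dim: int, hid: int, n_classes: int):
        super().__init__()
        # Per-node rationale score in [0, 1].
        self.mask = nn.Sequential(nn.Linear(in_dim, 1), nn.Sigmoid())
        self.encoder = nn.Linear(in_dim, hid)       # stand-in for a GNN layer
        self.classifier = nn.Linear(hid, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_dim) for one graph; the mask keeps rationale nodes.
        m = self.mask(x)                            # (num_nodes, 1)
        h = torch.relu(self.encoder(m * x))         # masked node embeddings
        g = h.mean(dim=0)                           # mean-pool to a graph embedding
        return self.classifier(g)                   # (n_classes,)


def environment_risk(model: nn.Module, envs, loss_fn=None) -> torch.Tensor:
    """Mean risk over environments plus a variance penalty that discourages
    features which are predictive in some environments but not others."""
    loss_fn = loss_fn or nn.CrossEntropyLoss()
    risks = []
    for graphs, labels in envs:  # each env: (list of node-feature tensors, label tensor)
        logits = torch.stack([model(x) for x in graphs])
        risks.append(loss_fn(logits, labels))
    risks = torch.stack(risks)
    return risks.mean() + risks.var(unbiased=False)


# Usage with two toy environments of 4-node graphs with 8-dim node features.
if __name__ == "__main__":
    model = RationaleSketch(in_dim=8, hid=16, n_classes=2)
    envs = [
        ([torch.randn(4, 8) for _ in range(5)], torch.randint(0, 2, (5,))),
        ([torch.randn(4, 8) for _ in range(5)], torch.randint(0, 2, (5,))),
    ]
    loss = environment_risk(model, envs)
    loss.backward()
    print(float(loss))
```

In a continual setting, one would apply such an objective to the environments derived from the current task together with those retained from earlier tasks, which is the sense in which the rationale, rather than task-specific shortcuts, guides learning.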
