Joint task and distribution generalization via graph substructure prompting
Yu-Luo CHEN, Ji-Xi LIU, Cheng YANG, Ya-Wen LI, Ting BAI, Chuan SHI
Front. Comput. Sci., 2026, 20(8): 2008344
Driven by the remarkable task-level generalization ability of large language models, an emerging trend in graph learning is to enable fast adaptation to new tasks with limited annotations, with applications across a broad spectrum of domains. Graph meta-learning and graph prompting techniques have demonstrated potential for task generalization by transferring knowledge acquired from prior tasks to new ones. However, these methods often overlook the distribution shifts between training and testing data that arise in real-world scenarios. To fill this gap, we study a novel and practical challenge: joint task and distribution generalization. Motivated by recent studies showing that explicitly identifying key substructures related to task prediction can aid generalization, we introduce a refiner module that highlights key substructures robust to distribution shifts. To efficiently adapt the refiner to new tasks, we introduce a small number of extra parameters as prompt vectors that instruct its behavior: a global prompt acquires universal knowledge, while task-specific prompts capture task-relevant information. We pretrain the model parameters on known tasks and adapt efficiently to a target task by learning only the corresponding classifier and task-specific prompt. Extensive experiments on task generalization show that the proposed Graph Substructure Prompting (GSP) significantly outperforms recent state-of-the-art (SOTA) methods on both in-distribution (ID) and out-of-distribution (OOD) data, rather than trading one off against the other. GSP also enjoys computational cost comparable to, or even lower than, that of the baselines.
graph neural networks / graph prompting / task generalization / out-of-distribution generalization / few-shot learning
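To make the adaptation scheme described in the abstract concrete, the following is a minimal, illustrative sketch (not the paper's implementation): a refiner scores node relevance conditioned on the sum of a frozen global prompt and a trainable task-specific prompt, and a lightweight classifier reads out from the soft-masked graph. All names (`SubstructureRefiner`, `node_scores`, `graph_embedding`) and the mean-aggregation readout are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

class SubstructureRefiner:
    """Hypothetical stand-in for GSP's refiner: scores each node's
    relevance to prediction, conditioned on prompt vectors."""
    def __init__(self, dim, prompt_dim):
        # frozen after pretraining on known tasks
        self.W = rng.standard_normal((dim + prompt_dim, 1)) * 0.1
        # global prompt: shared universal knowledge, also frozen
        self.global_prompt = rng.standard_normal(prompt_dim) * 0.1

    def node_scores(self, X, task_prompt):
        # condition on global + task-specific prompts
        p = self.global_prompt + task_prompt
        Z = np.concatenate([X, np.tile(p, (X.shape[0], 1))], axis=1)
        return 1.0 / (1.0 + np.exp(-Z @ self.W))  # soft mask in (0, 1)

def graph_embedding(X, A, scores):
    # one round of mean aggregation over the soft-masked subgraph
    H = (A @ (X * scores)) / np.maximum(A.sum(1, keepdims=True), 1.0)
    return H.mean(axis=0)

# toy graph: 5 nodes with 4-dim features, symmetric adjacency
X = rng.standard_normal((5, 4))
A = (rng.random((5, 5)) > 0.5).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)

refiner = SubstructureRefiner(dim=4, prompt_dim=3)

# adapting to a target task: only these two objects are learned
task_prompt = np.zeros(3)                   # task-specific prompt (trainable)
classifier = rng.standard_normal(4) * 0.1   # linear head (trainable)

scores = refiner.node_scores(X, task_prompt)
emb = graph_embedding(X, A, scores)
logit = float(emb @ classifier)
```

In this sketch, gradient updates during adaptation would touch only `task_prompt` and `classifier`, mirroring the paper's claim that a target task is handled by learning just a classifier and a task-specific prompt while the pretrained refiner stays fixed.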
Higher Education Press