Boosting cross-domain and cross-task generalization for text-attributed graphs from a structural perspective
Yao CHENG, Jiapeng ZHU, Yige ZHAO, Jianxiang YU, Jiaqi TAN, Xiang LI
Front. Comput. Sci., 2027, Vol. 21, Issue 3: 2103602
Graph models based on large language models (LLMs) have recently garnered considerable attention due to their significant success. Although existing methods resort to LLMs to learn unified semantic representations across domains, they disregard the unique structural characteristics of graphs from different domains. To address this problem, in this paper, we boost graph models from a structural perspective and propose BooG. The model constructs virtual super nodes to unify the structural characteristics of graph data from different domains. Specifically, each super node fuses the information of an anchor node and a class label, where the anchor node captures the information of a node or graph instance to be classified. Instead of relying on the raw graph structure, the super nodes, together with virtual edges, establish a standardized aggregation mechanism that fuses rich information from neighborhoods and associated class labels, accommodating the structural characteristics inherent to graphs from different domains. Additionally, we propose a novel pre-training objective based on contrastive learning, which learns more expressive representations for graph data and generalizes effectively to different domains and downstream tasks. Experimental results on various datasets and tasks demonstrate the superior performance of BooG. Our code and data are available at github.com/cy623/BooG.
Keywords: graph learning / graph foundation model / pre-trained graph models
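
To make the mechanism described in the abstract concrete, the following is a minimal, illustrative PyTorch sketch of its two core ideas: fusing an anchor-node embedding with a class-label embedding into a virtual super node, and a contrastive (InfoNCE-style) pre-training objective that pulls each anchor toward its own super node. All names (SuperNodeFuser, aggregate_via_virtual_edges, info_nce), the concatenation-based fusion, and the uniform aggregation over virtual edges are assumptions for exposition only, not the authors' BooG implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SuperNodeFuser(nn.Module):
    """Fuses an anchor-node embedding with a class-label text embedding
    into a virtual super node (hypothetical fusion design)."""
    def __init__(self, dim: int):
        super().__init__()
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, anchor: torch.Tensor, label: torch.Tensor) -> torch.Tensor:
        # Concatenate the two views and project back to the model dimension.
        return self.fuse(torch.cat([anchor, label], dim=-1))

def aggregate_via_virtual_edges(super_node: torch.Tensor,
                                neighbors: torch.Tensor) -> torch.Tensor:
    """Standardized aggregation: the super node gathers neighborhood
    messages over virtual edges rather than the raw graph structure.
    Uniform mean pooling is an assumption for illustration."""
    # super_node: (dim,); neighbors: (k, dim)
    return super_node + neighbors.mean(dim=0)

def info_nce(z_anchor: torch.Tensor, z_super: torch.Tensor,
             temperature: float = 0.5) -> torch.Tensor:
    """Contrastive objective: each anchor is pulled toward its own super
    node and pushed away from the super nodes of other instances."""
    z_a = F.normalize(z_anchor, dim=-1)
    z_s = F.normalize(z_super, dim=-1)
    logits = z_a @ z_s.t() / temperature   # (n, n) similarity matrix
    targets = torch.arange(z_a.size(0))    # positive pairs on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with random stand-ins for text-attribute embeddings.
dim, n = 64, 8
fuser = SuperNodeFuser(dim)
anchors = torch.randn(n, dim)              # one anchor per instance
labels = torch.randn(n, dim)               # class-label text embeddings
supers = fuser(anchors, labels)            # virtual super nodes

neigh = torch.randn(5, dim)                # a toy neighborhood
h = aggregate_via_virtual_edges(supers[0], neigh)

loss = info_nce(anchors, supers)
loss.backward()
print(f"contrastive loss: {loss.item():.4f}")

The sketch treats the label embedding as just another input view; in a text-attributed setting, both anchor and label embeddings would plausibly come from a shared text encoder so that instances and class labels live in one semantic space.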