Continual document-level relation extraction with partial labeling compensation

Xinyi WANG, Zitao WANG, Yuan FENG, Wei HU

Front. Comput. Sci. ›› 2027, Vol. 21 ›› Issue (1): 2101305 DOI: 10.1007/s11704-025-50828-9

Artificial Intelligence
RESEARCH ARTICLE

Abstract

Document-level relation extraction (RE) aims to identify the relations between entities across multiple sentences. In real life, new relations constantly emerge in new texts, raising the challenge of continually learning new relations without forgetting those already learned. Previous continual RE work has primarily focused on sentence-level RE, where each entity pair is associated with a single sentence and annotated with one relation. However, emerging relations may exist between entity pairs spanning multiple sentences or between entity pairs with pre-existing relations, necessitating the application of continual learning to document-level RE. To this end, we consider continual document-level RE and propose a novel model named CDRE to alleviate the partial labeling problem, which severely degrades the performance of RE models. Specifically, we propose multi-binary knowledge distillation to transfer the knowledge of learned relations from the previously trained model to the current model. We introduce asymmetric training to coordinate the influence of positive samples and samples with learned yet unannotated relations. Furthermore, we exploit the correlation between relations to augment label generation, re-annotating the learned relations in current samples and the newly emerging relations in memorized samples. To simulate real-world scenarios, we construct two benchmark datasets derived from two widely-used document-level RE datasets. Experimental results on these datasets validate the superiority of CDRE on continual document-level RE.
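To make the distillation idea concrete, here is a minimal sketch of what a multi-binary knowledge distillation loss could look like for multi-label document-level RE, assuming each relation has an independent sigmoid classifier and that only the columns of previously learned relations receive a teacher signal. The function name, temperature, and tensor shapes are illustrative assumptions, not details taken from the paper.

```python
import torch

def multi_binary_distillation_loss(student_logits: torch.Tensor,
                                   teacher_logits: torch.Tensor,
                                   old_relation_ids: torch.Tensor,
                                   temperature: float = 2.0) -> torch.Tensor:
    """Distill per-relation binary probabilities for previously learned relations.

    student_logits, teacher_logits: (num_entity_pairs, num_relations) raw
    scores, with one independent binary classifier per relation
    (multi-label setting). old_relation_ids: indices of relations learned
    in earlier tasks; only these columns are distilled, so newly annotated
    relations are left free to fit the current task's labels.
    """
    s = torch.sigmoid(student_logits[:, old_relation_ids] / temperature)
    t = torch.sigmoid(teacher_logits[:, old_relation_ids] / temperature)
    # Binary KL divergence between the teacher's and student's Bernoulli
    # distributions, summed over the two outcomes (relation holds / not).
    eps = 1e-8
    kl = (t * torch.log((t + eps) / (s + eps))
          + (1 - t) * torch.log((1 - t + eps) / (1 - s + eps)))
    return kl.mean()
```

In a continual setup this term would be added to the current task's classification loss, so predictions on old relations stay close to the frozen teacher even when those relations are unannotated in the new data.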

Keywords

document-level relation extraction / continual learning / knowledge distillation / asymmetric training / correlation / label generation

Cite this article

Xinyi WANG, Zitao WANG, Yuan FENG, Wei HU. Continual document-level relation extraction with partial labeling compensation. Front. Comput. Sci., 2027, 21(1): 2101305. DOI: 10.1007/s11704-025-50828-9



RIGHTS & PERMISSIONS

Higher Education Press
