A multi-view heterogeneous and extractive graph attention network for evidential document-level event factuality identification

Zhong QIAN , Peifeng LI , Qiaoming ZHU , Guodong ZHOU

Front. Comput. Sci. ›› 2025, Vol. 19 ›› Issue (6) : 196319 DOI: 10.1007/s11704-024-3809-6
Artificial Intelligence
RESEARCH ARTICLE


Abstract

Evidential Document-level Event Factuality Identification (EvDEFI) aims to predict the factuality of an event and to precisely extract the evidential sentences from a document. Previous work was usually limited to predicting the factuality of an event with respect to a document and neglected the interpretability of the task. As a more fine-grained and interpretable task, EvDEFI is still at an early stage. The only existing model extracted evidence with shallow similarity calculation and employed simple attention without lexical features, which is quite coarse-grained. Therefore, we propose a novel EvDEFI model named Heterogeneous and Extractive Graph Attention Network (HEGAT), which updates the representations of events and sentences with multi-view graph attention over tokens and various lexical features at both local and global levels. Experiments on the EB-DEF-v2 corpus demonstrate that HEGAT outperforms several competitive baselines and validates the interpretability of the task.
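
The abstract only sketches the architecture, so the following is a minimal, hypothetical PyTorch sketch of the kind of multi-view graph attention update it describes: event and sentence nodes are projected under several views (for instance a token-based view and a lexical-feature-based view), attention weights are computed over graph neighbours per view, and the per-view updates are fused. The class name MultiViewGraphAttention, the tensor shapes, and the averaging fusion are illustrative assumptions, not the authors' HEGAT implementation.

# Hypothetical sketch of a multi-view graph attention update over event and
# sentence nodes, in the spirit of the HEGAT description above. All names,
# dimensions, and the fusion of the token view and the lexical-feature view
# are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiViewGraphAttention(nn.Module):
    def __init__(self, dim: int, num_views: int = 2):
        super().__init__()
        # One linear projection and one attention vector per view
        # (e.g., a token-based view and a lexical-feature-based view).
        self.proj = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_views)])
        self.attn = nn.ParameterList(
            [nn.Parameter(torch.randn(2 * dim)) for _ in range(num_views)]
        )

    def forward(self, nodes: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        """nodes: (N, dim) event + sentence node states.
        adj: (N, N) 0/1 mask of the heterogeneous graph, assumed to
        include self-loops so every node has at least one neighbour."""
        view_outputs = []
        for proj, attn in zip(self.proj, self.attn):
            h = proj(nodes)                                    # (N, dim)
            # Pairwise attention logits a(h_i || h_j), masked by the graph.
            pair = torch.cat(
                [h.unsqueeze(1).expand(-1, h.size(0), -1),
                 h.unsqueeze(0).expand(h.size(0), -1, -1)], dim=-1
            )                                                  # (N, N, 2*dim)
            logits = F.leaky_relu(pair @ attn)                 # (N, N)
            logits = logits.masked_fill(adj == 0, float("-inf"))
            weights = torch.softmax(logits, dim=-1)
            view_outputs.append(weights @ h)                   # (N, dim)
        # Average the per-view updates; a learned fusion is equally plausible.
        return torch.stack(view_outputs).mean(dim=0)

A single layer would then be applied to the stacked event and sentence node states with the document's sentence-level graph as the mask; stacking layers lets local (within-sentence) and global (cross-sentence) information propagate, which is the intuition the abstract appeals to.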

Keywords

evidential document-level event factuality / heterogeneous graph network / multi-view attentions / speculation and negation

Cite this article

Zhong QIAN, Peifeng LI, Qiaoming ZHU, Guodong ZHOU. A multi-view heterogeneous and extractive graph attention network for evidential document-level event factuality identification. Front. Comput. Sci., 2025, 19(6): 196319. DOI: 10.1007/s11704-024-3809-6



RIGHTS & PERMISSIONS

Higher Education Press


Supplementary files

FCS-23809-OF-ZQ_suppl_1
