Large language models for generative information extraction: a survey
Derong XU, Wei CHEN, Wenjun PENG, Chao ZHANG, Tong XU, Xiangyu ZHAO, Xian WU, Yefeng ZHENG, Yang WANG, Enhong CHEN
Front. Comput. Sci., 2024, Vol. 18, Issue 6: 186357
Information Extraction (IE) aims to extract structured knowledge from plain natural-language texts. Recently, generative Large Language Models (LLMs) have demonstrated remarkable capabilities in text understanding and generation, and numerous works have been proposed that integrate LLMs into IE tasks under a generative paradigm. To provide a comprehensive and systematic review of LLM efforts on IE, this study surveys the most recent advances in the field. We first present an extensive overview that categorizes these works by IE subtask and by technique, and we then empirically analyze the most advanced methods to identify emerging trends in IE with LLMs. Based on this review, we distill several technical insights and promising research directions that deserve further exploration in future studies. We maintain a public repository on GitHub (the LLM4IE repository) and continually update it with related works and resources.
information extraction / large language models / review
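To make the generative paradigm the abstract refers to concrete, the sketch below frames named entity recognition, a core IE subtask, as text generation: the model is prompted to emit entities as structured JSON rather than to label tokens. This is an illustrative sketch only, not a method from the survey; the `llm` callable, the prompt wording, and the entity type set are assumptions standing in for any concrete model API.

```python
import json
from typing import Callable, Dict, List

# Prompt template asking the model to *generate* structured output
# (the hallmark of generative IE), instead of classifying tokens.
NER_PROMPT = """Extract all named entities from the text below.
Return a JSON list of objects with keys "span" and "type",
where "type" is one of: PER, ORG, LOC, MISC.

Text: {text}
JSON:"""

def generative_ner(text: str, llm: Callable[[str], str]) -> List[Dict[str, str]]:
    """Prompt an LLM to emit entities as JSON and parse the result.

    `llm` is a hypothetical stand-in for any chat/completion API
    that maps a prompt string to a generated string.
    """
    raw = llm(NER_PROMPT.format(text=text))
    try:
        entities = json.loads(raw)
    except json.JSONDecodeError:
        return []  # generation was not valid JSON; a real system would retry
    # Keep only well-formed entries whose spans actually occur in the text.
    return [
        e for e in entities
        if isinstance(e, dict)
        and isinstance(e.get("span"), str)
        and e["span"] in text
    ]

if __name__ == "__main__":
    # Toy stand-in model that returns one hard-coded entity,
    # just to exercise the prompt-and-parse loop end to end.
    fake_llm = lambda prompt: '[{"span": "Paris", "type": "LOC"}]'
    print(generative_ner("She flew to Paris.", fake_llm))
```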
|
© The Author(s) 2024. This article is published with open access at link.springer.com and journal.hep.com.cn.