Entity and relation extraction with rule-guided dictionary as domain knowledge

Xinzhi WANG, Jiahao LI, Ze ZHENG, Yudong CHANG, Min ZHU

Front. Eng ›› 2022, Vol. 9 ›› Issue (4) : 610-622. DOI: 10.1007/s42524-022-0226-0
RESEARCH ARTICLE



Abstract

Entity and relation extraction is an indispensable part of domain knowledge graph construction, which can serve knowledge needs in a specific domain, such as supporting product research, sales, risk control, and domain hotspot analysis. Existing entity and relation extraction methods that depend on pretrained models have shown promising performance on open datasets. However, their performance degrades on domain-specific datasets. Entity extraction models treat characters as basic semantic units while ignoring known character dependencies in specific domains. Relation extraction is based on the hypothesis that the relations hidden in a sentence are uniform, thereby neglecting that relations may differ across entity tuples. To address these problems, this paper first introduced prior knowledge composed of domain dictionaries to enhance character dependencies. Second, domain rules were built to eliminate noise in entity relations and promote the extraction of potential entity relations. Finally, experiments were designed to verify the effectiveness of the proposed methods. Experimental results on two domains, the laser industry and unmanned ships, showed the superiority of our methods. The F1 values on the laser industry entity, unmanned ship entity, laser industry relation, and unmanned ship relation datasets were improved by 1%, 6%, 2%, and 1%, respectively. In addition, the extraction accuracy of entity relation triplets reached 83% and 76% on the laser industry and unmanned ship entity pair datasets, respectively.
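The two ideas in the abstract can be illustrated with a minimal sketch: a domain dictionary supplies prior knowledge about character spans for entity extraction, and domain rules filter noisy candidate relation triples. All names, dictionary entries, and rules below are hypothetical illustrations, not the paper's actual implementation.

```python
# Hypothetical sketch: dictionary priors for entity spans and
# rule-based filtering of candidate relation triples.
# DOMAIN_DICT and RULES are illustrative placeholders.

DOMAIN_DICT = {"fiber laser", "laser diode", "unmanned ship"}

def dictionary_features(sentence: str) -> list:
    """Tag each character B/I/O according to dictionary matches,
    exposing known character dependencies to the extractor."""
    tags = ["O"] * len(sentence)
    for term in DOMAIN_DICT:
        start = sentence.find(term)
        while start != -1:
            tags[start] = "B"
            for i in range(start + 1, start + len(term)):
                tags[i] = "I"
            start = sentence.find(term, start + 1)
    return tags

# An illustrative domain rule: the relation "produces" only holds
# from a company entity to a product entity.
RULES = {"produces": ("company", "product")}

def filter_triples(candidates):
    """Keep only (head, relation, tail) triples whose entity
    types satisfy a domain rule; everything else is noise."""
    return [
        (h, r, t)
        for (h, h_type, r, t, t_type) in candidates
        if RULES.get(r) == (h_type, t_type)
    ]

print("".join(dictionary_features("the fiber laser market grew")))
print(filter_triples([
    ("ACME", "company", "produces", "fiber laser", "product"),
    ("fiber laser", "product", "produces", "ACME", "company"),
]))
```

In this sketch the dictionary match marks the span "fiber laser" with B/I tags while all other characters stay O, and the rule filter keeps only the triple whose head and tail types match the rule's signature.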

Keywords

entity extraction / relation extraction / prior knowledge / domain rule

Cite this article

Xinzhi WANG, Jiahao LI, Ze ZHENG, Yudong CHANG, Min ZHU. Entity and relation extraction with rule-guided dictionary as domain knowledge. Front. Eng, 2022, 9(4): 610‒622 https://doi.org/10.1007/s42524-022-0226-0


RIGHTS & PERMISSIONS

2022 Higher Education Press