Relation extraction is an important task in natural language processing, playing an integral role in intelligent question answering, semantic search, and knowledge graph construction. For this task, previous studies have demonstrated the effectiveness of convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory networks (LSTMs). More recently, owing to the strong performance of the pre-trained BERT model, BERT has become the feature-extraction module of many relation classification models, and BERT-based approaches have achieved good results. However, most of this work uses only the deepest-level features and ignores the important role that shallow-level information plays in relation classification. To address this problem, this paper proposes FA-RCNet (fusion-attention relation classification network), a relation classification network with feature fusion and an attention mechanism. FA-RCNet fuses shallow-level features with deep-level features and augments entity features and global features through an attention module, so that the resulting feature vector is better suited to the relation classification task. Compared with previously published models, the proposed model achieves strong results on both the SemEval-2010 Task 8 dataset and the KBP37 dataset.
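A minimal sketch of the shallow/deep feature-fusion-with-attention idea the abstract describes is given below. The layer indices, the concatenation-based fusion, the attention form, and the class count (19 relations, matching SemEval-2010 Task 8) are illustrative assumptions, not the authors' exact FA-RCNet architecture.

```python
# Hypothetical sketch: fuse a shallow and a deep BERT layer, then pool with attention.
# Layer choices, fusion by concatenation, and the attention form are assumptions.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class FusionAttentionSketch(nn.Module):
    def __init__(self, bert_name="bert-base-uncased", num_relations=19,
                 shallow_layer=3, deep_layer=12):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name, output_hidden_states=True)
        hidden = self.bert.config.hidden_size
        self.shallow_layer = shallow_layer   # assumed shallow layer index
        self.deep_layer = deep_layer         # assumed deep layer index
        self.attn = nn.Linear(2 * hidden, 1)            # token-level attention scores
        self.classifier = nn.Linear(2 * hidden, num_relations)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        hidden_states = out.hidden_states               # embeddings + one entry per layer
        shallow = hidden_states[self.shallow_layer]     # (B, T, H) shallow features
        deep = hidden_states[self.deep_layer]           # (B, T, H) deep features
        fused = torch.cat([shallow, deep], dim=-1)      # feature fusion by concatenation
        scores = self.attn(fused).squeeze(-1)           # attention-weighted pooling over tokens
        scores = scores.masked_fill(attention_mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
        pooled = (weights * fused).sum(dim=1)
        return self.classifier(pooled)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = FusionAttentionSketch()
batch = tokenizer(["The author wrote the book."], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # torch.Size([1, 19])
```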
Relation classification is an important upstream task in natural language processing: it can provide a high-quality corpus for tasks such as machine translation, human–computer dialogue, and structured text generation. In the digitalization of standards, identifying entity relationships in standard texts is an important prerequisite for building standards-based knowledge. Only when the relationships between entities are accurately labeled can subsequent knowledge bases and knowledge graphs be built with high efficiency and accuracy. This study proposes a relation classification model for standard texts based on cascaded word vector attention and feature splicing. Comparison and ablation experiments were conducted on our annotated Chinese standard-text dataset, and, to further demonstrate the model's performance, the same experiments were carried out on two general English datasets, SemEval-2010 Task 8 and KBP37. The proposed model achieved excellent results on both the standard-text dataset and the general datasets.
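The abstract names "word vector attention" and "feature splicing" without detailing them; the sketch below shows one plausible reading, in which attention-pooled word vectors are spliced (concatenated) with two entity vectors before classification. The cascading scheme, vector sources, and dimensions are assumptions for illustration, not the authors' exact model.

```python
# Illustrative sketch: attention over word vectors, then splice with entity vectors.
import torch
import torch.nn as nn

class AttentionSpliceSketch(nn.Module):
    def __init__(self, hidden=768, num_relations=19):
        super().__init__()
        self.word_attn = nn.Linear(hidden, 1)             # attention scores over word vectors
        self.classifier = nn.Linear(3 * hidden, num_relations)

    def forward(self, word_vecs, e1_vec, e2_vec):
        # word_vecs: (B, T, H) contextual word vectors; e1_vec / e2_vec: (B, H) entity vectors
        weights = torch.softmax(self.word_attn(word_vecs).squeeze(-1), dim=-1)
        sent_vec = (weights.unsqueeze(-1) * word_vecs).sum(dim=1)   # attention-pooled sentence vector
        spliced = torch.cat([sent_vec, e1_vec, e2_vec], dim=-1)     # feature splicing (concatenation)
        return self.classifier(spliced)

sketch = AttentionSpliceSketch()
logits = sketch(torch.randn(2, 16, 768), torch.randn(2, 768), torch.randn(2, 768))
print(logits.shape)  # torch.Size([2, 19])
```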