2022
DOI: 10.3390/app122312460
FA-RCNet: A Fused Feature Attention Network for Relationship Classification

Abstract: Relation extraction is an important task in natural language processing, playing an integral role in intelligent question-answering systems, semantic search, and knowledge graphs. Previous studies have demonstrated the effectiveness of convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory networks (LSTMs) for relation classification. Recently, owing to its superior performance, the pre-trained model BERT has become a feature extract…
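The abstract describes using a pre-trained BERT encoder as the feature extractor for relation classification. The sketch below is a minimal illustration of that general setup, not the authors' FA-RCNet architecture (whose fused feature attention details are not given here); the model name, entity-marker tokens, [CLS]-token pooling, and the 19-class SemEval-2010 Task 8 label set are all assumptions made for illustration.

```python
# Minimal sketch of BERT-as-feature-extractor relation classification.
# NOT the authors' FA-RCNet; model name, markers, and pooling are assumptions.
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel

class BertRelationClassifier(nn.Module):
    def __init__(self, num_relations: int, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)   # BERT as feature extractor
        hidden = self.encoder.config.hidden_size
        self.classifier = nn.Linear(hidden, num_relations)     # relation label head

    def forward(self, input_ids, attention_mask):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls_feature = outputs.last_hidden_state[:, 0]           # pool the [CLS] token
        return self.classifier(cls_feature)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertRelationClassifier(num_relations=19)  # SemEval-2010 Task 8 has 19 relation labels

# Entity markers are a common convention for marking the two entities; assumed here.
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<e1>", "</e1>", "<e2>", "</e2>"]}
)
model.encoder.resize_token_embeddings(len(tokenizer))

sentence = "The <e1>company</e1> was acquired by the <e2>investor</e2>."
batch = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(batch["input_ids"], batch["attention_mask"])
predicted_relation = logits.argmax(dim=-1)
```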

Cited by 1 publication (1 citation statement)
References 59 publications
“…Compared with BiLSTM-attention and multi-attention CNN, which are based on a bidirectional long short-term memory network and an attention mechanism, it increased by 3.7% and 2.7%, respectively. Compared with CRSAtt [37] and FA-RCNet [38], which are based on the BERT pre-trained model, it increased by 0.8% and 0.9%, respectively. The experimental results showed that the relationship classification model proposed in this paper also achieved good results on the SemEval-2010 Task 8 dataset.…”
Section: Comparative Experiment Results and Analysis (mentioning)
confidence: 98%