2020
DOI: 10.1609/aaai.v34i05.6471
Joint Entity and Relation Extraction with a Hybrid Transformer and Reinforcement Learning Based Model

Abstract: Joint extraction of entities and relations is a task that extracts entity mentions and the semantic relations between them from unstructured text with a single model. Existing entity and relation extraction datasets usually rely on distant supervision, which cannot verify the correspondence between a relation label and the sentence it is assigned to, and thus suffers from the noisy-labeling problem. We propose a hybrid deep neural network model to jointly extract the entities and relations, and the model is also …

Cited by 27 publications (10 citation statements)
References 23 publications
“…Over the past several years, deep learning-based entity and relation extraction has become a research hotspot, as it alleviates the problems of label noise and of error propagation from feature extraction in distantly supervised datasets. The approaches mainly include convolutional neural networks (Li et al., 2020a, 2020b; Santos et al., 2015; Zeng et al., 2014), recurrent neural networks (Katiyar and Cardie, 2017; Tourille et al., 2017), generative adversarial networks (Qin et al., 2018), deep reinforcement learning (Xiao et al., 2020), and so on. For example, Zeng et al. (2014) proposed using the word vector and the word position vector as the input of a convolutional neural network (CNN), introducing the distance between each word and the entities so that entity information in the sentence is taken into account for relation extraction.…”
Section: Literature Review
Confidence: 99%
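The input representation Zeng et al. (2014) describe can be sketched in a few lines: each token is the concatenation of its word vector with two position embeddings encoding its distance to the head and tail entities. This is a minimal numpy sketch, not the authors' implementation; the sentence, vocabulary, and all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sentence with two marked entities (illustrative only).
sentence = ["Paris", "is", "the", "capital", "of", "France"]
head_idx, tail_idx = 0, 5          # entity positions: "Paris", "France"

d_word, d_pos = 8, 4               # embedding sizes (arbitrary for the sketch)
word_emb = {w: rng.normal(size=d_word) for w in sentence}
# One learned vector per relative distance, clipped here to [-5, 5].
pos_emb = {d: rng.normal(size=d_pos) for d in range(-5, 6)}

def clip(d):
    return max(-5, min(5, d))

# Token representation: [word vector ; dist-to-head emb ; dist-to-tail emb],
# which is then fed to the convolution layer in the CNN approach.
rows = [
    np.concatenate([word_emb[w],
                    pos_emb[clip(i - head_idx)],
                    pos_emb[clip(i - tail_idx)]])
    for i, w in enumerate(sentence)
]
X = np.stack(rows)                 # shape: (len(sentence), d_word + 2 * d_pos)
print(X.shape)                     # (6, 16)
```

The convolution then slides over `X`, so every window sees both lexical content and how far each word sits from the two entities.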
See 1 more Smart Citation
“…Over the past several years, the deep learning-based entity relationship extraction has become a research hotspot, as it alleviates the problem of error labelling and feature extraction error propagation in a distant supervised data set. The approaches mainly include convolutional neural networks (Li et al , 2020a, 2020b; Santos et al , 2015; Zeng et al , 2014), recurrent neural networks (Katiyar and Cardie, 2017; Tourille et al , 2017), generative adversarial networks (Qin et al , 2018), deep reinforcement learning (Xiao et al , 2020) and so on. For example, Zeng et al (2014) proposed using the word vector and the word position vector as the input of the convolutional neural network (CNN) and introduced the distance information between the entity and other words, which can well take the entity information in the sentence into account for relation extraction.…”
Section: Literature Reviewmentioning
confidence: 99%
“…The experimental results demonstrated that the proposed strategy performs better than other state-of-the-art methods. Xiao et al. (2020) proposed a hybrid deep learning model that combines a transformer-based encoder and LSTM-based entity embeddings with a deep reinforcement learning module, which improves relation classification and the filtering of noisy data. Generally, deep learning-based methods have achieved better results than traditional machine learning.…”
Section: Literature Review
Confidence: 99%
“…Another related direction is policy-based reinforcement learning (Williams, 1992; Sutton et al., 2000). Previous methods (Xiong et al., 2017; Xiao et al., 2020) usually affect the training of the main task through a policy network with separate parameters. Different from them, we directly regard the main network as the policy network and its output graph as the action distribution.…”
Section: Related Work
Confidence: 99%
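The contrast drawn above can be made concrete with a tiny REINFORCE (Williams, 1992) sketch in which the task network's own softmax output serves directly as the policy's action distribution, rather than training a separate policy network. This is a hedged toy example, assuming a single linear layer, a bandit-style reward, and made-up dimensions; it is not the cited systems' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, n_actions = 4, 3
W = rng.normal(scale=0.1, size=(n_features, n_actions))  # the "main network"

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

x = rng.normal(size=n_features)        # one toy input state

for step in range(200):
    probs = softmax(x @ W)             # network output = action distribution
    a = rng.choice(n_actions, p=probs) # sample an action from that policy
    reward = 1.0 if a == 2 else 0.0    # toy reward favouring action 2
    # REINFORCE update: reward * d log pi(a|x) / dW,
    # where d log softmax / d logits = one_hot(a) - probs.
    grad_logp = np.outer(x, np.eye(n_actions)[a] - probs)
    W += 0.5 * reward * grad_logp

print(softmax(x @ W).argmax())         # 2: the rewarded action dominates
```

Because the reward gradient flows through the same parameters `W` that produce the task output, there are no separate policy parameters to train, which is the design choice the quoted passage highlights.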
“…External knowledge like entity descriptions and type information has been used for RE (Yaghoobzadeh et al., 2017; Vashishth et al., 2018). Pre-trained language models contain a notable amount of semantic information and commonsense knowledge, and several works have applied them to RE (Alt et al., 2019; Xiao et al., 2020).…”
Section: Introduction
Confidence: 99%