Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d17-1004

Position-aware Attention and Supervised Data Improve Slot Filling

Abstract: Organized relational knowledge in the form of "knowledge graphs" is important for many applications. However, the ability to populate knowledge bases with facts automatically extracted from documents has improved frustratingly slowly. This paper simultaneously addresses two issues that have held back prior work. We first propose an effective new model, which combines an LSTM sequence model with a form of entity position-aware attention that is better suited to relation extraction. Then we build TACRED, a large…

Cited by 650 publications (781 citation statements) | References 21 publications
“…We first conduct experiments on the widely used benchmark data set Riedel (Riedel et al., 2010), and then on the TACRED (Zhang et al., 2017) data set. The latter allows us to control the noise level in the labels to observe the behavior and working mechanism of our proposed method.…”
Section: Experimental Study
confidence: 99%
“…In contrast to their work, we extend the convolutional neural network in this paper not only to perform relation classification but also to jointly learn to classify entities and relations. Recently, Zhang et al. (2017) propose position-aware attention, which calculates attention weights based on the current hidden state of their LSTM, the output state of the LSTM, and the position embeddings which encode the distance of the current word to the two relation arguments. Moreover, they publish a supervised relation extraction dataset, obtained by crowdsourcing, for training slot-filling relation classification models.…”
Section: Related Work
confidence: 99%
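The position-aware attention described in the excerpt above can be sketched as follows. This is a minimal PyTorch illustration derived from that description, not the authors' released implementation: the module name PositionAwareAttention, the dimension arguments, and the use of the final LSTM state as the summary query are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionAwareAttention(nn.Module):
    # Sketch: score each token from (i) its LSTM hidden state, (ii) a sentence
    # summary vector, and (iii) embeddings of its distance to the subject and
    # object entities; output an attention-weighted sum of the hidden states.
    def __init__(self, hidden_dim, pos_dim, attn_dim):
        super().__init__()
        self.w_h = nn.Linear(hidden_dim, attn_dim, bias=False)   # token hidden state
        self.w_q = nn.Linear(hidden_dim, attn_dim, bias=False)   # summary query (e.g. final LSTM state)
        self.w_p = nn.Linear(2 * pos_dim, attn_dim, bias=False)  # subject + object position embeddings
        self.v = nn.Linear(attn_dim, 1, bias=False)              # scoring vector

    def forward(self, h, q, pos_subj, pos_obj):
        # h: (batch, seq_len, hidden_dim); q: (batch, hidden_dim)
        # pos_subj, pos_obj: (batch, seq_len, pos_dim)
        p = torch.cat([pos_subj, pos_obj], dim=-1)
        scores = self.v(torch.tanh(
            self.w_h(h) + self.w_q(q).unsqueeze(1) + self.w_p(p)
        )).squeeze(-1)                                            # (batch, seq_len)
        attn = F.softmax(scores, dim=-1)
        return torch.bmm(attn.unsqueeze(1), h).squeeze(1)         # weighted sum over tokens

The resulting sentence vector would then typically feed a final classifier over relation types; the exact classifier head is not specified in the excerpt.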
“…Note that there are many other variants of such models for RE in the literature (Zhang et al., 2017; Verga et al., 2018). However, as our goal in this paper is to evaluate different pooling mechanisms for RE, we focus on these standard representation learning methods to avoid the confounding effect of the complicated models, thus better revealing the effectiveness of the pooling methods.…”
Section: The Representation Component For RE
confidence: 99%
“…Due to its important applications in many areas of natural language processing (e.g., question answering, knowledge base construction), RE has been actively studied in the last decade, featuring a variety of feature-based or kernel-based models for this problem (Zelenko et al., 2002; Zhou et al., 2005; Bunescu and Mooney, 2005; Sun et al., 2011; Chan and Roth, 2010; Nguyen et al., 2009). Recently, the introduction of deep learning has produced a new generation of models for RE with state-of-the-art performance on many different benchmark datasets (Zeng et al., 2014; dos Santos et al., 2015; Xu et al., 2015; Liu et al., 2015; Zhou et al., 2016; Wang et al., 2016; Zhang et al., 2017, 2018b). The advantage of deep learning over the previous approaches for RE is the ability to automatically learn effective features for the sentences from data via various network architectures.…”
Section: Introduction
confidence: 99%