Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)
DOI: 10.18653/v1/2020.emnlp-main.298

Learning from Context or Names? An Empirical Study on Neural Relation Extraction

Abstract: Neural models have achieved remarkable success on relation extraction (RE) benchmarks. However, there is no clear understanding of which types of information affect existing RE models' decisions, or of how to further improve their performance. To this end, we empirically study the effect of two main information sources in text: textual context and entity mentions (names). We find that (i) while context is the main source to support the predictions, RE models also heavily rely on the information …
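The ablation the abstract describes can be illustrated with a small input-masking step: hiding entity names so a model must predict the relation from context alone. The following is a minimal sketch, not the paper's exact preprocessing; the marker tokens and the helper name mask_entity_mentions are illustrative assumptions.

    # Sketch: remove entity names so only textual context remains visible.
    def mask_entity_mentions(tokens, head_span, tail_span,
                             head_marker="[SUBJ]", tail_marker="[OBJ]"):
        """Replace the head/tail entity mentions with placeholder tokens.

        tokens    : list of word tokens for one sentence
        head_span : (start, end) indices of the head entity (end exclusive)
        tail_span : (start, end) indices of the tail entity (end exclusive)
        """
        (hs, he), (ts, te) = head_span, tail_span
        # Replace the later span first so earlier indices stay valid.
        spans = sorted([(hs, he, head_marker), (ts, te, tail_marker)],
                       key=lambda s: s[0], reverse=True)
        masked = list(tokens)
        for start, end, marker in spans:
            masked[start:end] = [marker]
        return masked

    tokens = "Steve Jobs co-founded Apple in 1976 .".split()
    print(mask_entity_mentions(tokens, head_span=(0, 2), tail_span=(3, 4)))
    # ['[SUBJ]', 'co-founded', '[OBJ]', 'in', '1976', '.']

Comparing a model's accuracy on the original and the masked inputs indicates how much of its decision rests on context versus entity names.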

Cited by 117 publications (92 citation statements)
References 31 publications
“…Some focus on obtaining better span representations, including entity mentions, via span-based pre-training (Joshi et al., 2020; Kong et al., 2020; Ye et al., 2020). Others learn to extract relation-aware semantics from text by comparing sentences that share the same entity pair or the same distantly supervised relation in KGs (Soares et al., 2019; Peng et al., 2020). However, these methods only consider either individual entities or within-sentence relations, which limits their performance when dealing with multiple entities and relations at the document level.…”
Section: Related Work
confidence: 99%
“…The success of supervised relation extraction methods (Bunescu and Mooney, 2005; Qian et al., 2008; Zeng et al., 2014; Zhou et al., 2016; Veličković et al., 2018) depends heavily on large amounts of annotated data. Due to the bottleneck of annotation cost, some weakly supervised methods have been proposed to learn the extraction model from distantly labeled datasets (Mintz et al., 2009; Hoffmann et al., 2011; Lin et al., 2016) or few-shot datasets (Han et al., 2018; Baldini Soares et al., 2019; Peng et al., 2020). However, these paradigms are still limited to predefined relation types from human definitions or knowledge bases, which are usually unavailable in the open-domain scenario.…”
Section: Related Work
confidence: 99%
“…In the NLP field, Dai and Lin (2017) proposed to use contrastive learning for image captioning, and Clark et al. (2020) trained a discriminative model for language representation learning. Recent literature (Peng et al., 2020) has also attempted to relate the contrastive pre-training scheme to the classical supervised RE task. Different from our work, Peng et al. (2020) aims to utilize abundant DS data to help classical supervised RE models learn better relation representations, while our CIL focuses on learning an effective and efficient DSRE model under DS data noise.…”
Section: Sentence
confidence: 99%
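For readers unfamiliar with the contrastive pre-training scheme this excerpt refers to, the sketch below shows an InfoNCE-style loss over relation representations. It is only an illustration of the general idea under the assumption that sentence pairs sharing a distantly supervised relation act as positives; the cited works differ in their encoders and in how positive and negative pairs are constructed.

    # Sketch: InfoNCE-style contrastive loss over relation representations.
    import torch
    import torch.nn.functional as F

    def contrastive_relation_loss(anchor, positive, negatives, temperature=0.1):
        """InfoNCE loss for one anchor relation representation.

        anchor    : (d,)   representation of a sentence's entity pair
        positive  : (d,)   representation of a sentence sharing the same relation
        negatives : (n, d) representations of sentences with other relations
        """
        anchor = F.normalize(anchor, dim=-1)
        candidates = F.normalize(torch.cat([positive.unsqueeze(0), negatives]), dim=-1)
        logits = candidates @ anchor / temperature   # (n + 1,) cosine similarities
        target = torch.zeros(1, dtype=torch.long)    # the positive sits at index 0
        return F.cross_entropy(logits.unsqueeze(0), target)

    anchor, positive = torch.randn(768), torch.randn(768)
    negatives = torch.randn(16, 768)
    print(contrastive_relation_loss(anchor, positive, negatives).item())

Minimizing this loss pulls representations of sentences expressing the same relation together and pushes apart those expressing different relations, which is the behavior the excerpt contrasts with its own noise-robust DSRE objective.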