Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d17-1005

Heterogeneous Supervision for Relation Extraction: A Representation Learning Approach

Abstract: Relation extraction is a fundamental task in information extraction. Most existing methods rely heavily on annotations labeled by human experts, which are costly and time-consuming. To overcome this drawback, we propose a novel framework, REHESSION, to conduct relation extractor learning using annotations from heterogeneous information sources, e.g., knowledge bases and domain heuristics. These annotations, referred to as heterogeneous supervision, often conflict with each other, which brings a new challenge…
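
To make the central challenge concrete, here is a minimal sketch of two hypothetical supervision sources (a knowledge base and a domain heuristic) annotating the same entity pair and disagreeing; every name, fact, and label in it is illustrative rather than taken from the paper.

```python
# Minimal sketch of heterogeneous supervision: several labeling
# sources annotate the same relation mention and may conflict.
# All entities, facts, and relation labels below are illustrative.

def kb_supervision(subj, obj, kb):
    """Distant supervision: label the pair if the KB holds a fact for it."""
    return kb.get((subj, obj))  # a relation label, or None

def pattern_supervision(sentence, subj, obj):
    """Domain heuristic: a hand-written textual pattern."""
    if f"{subj} was born in {obj}" in sentence:
        return "born_in"
    return None

sentence = "Obama was born in Hawaii ."
kb = {("Obama", "Hawaii"): "lived_in"}  # a noisy KB fact

annotations = [
    kb_supervision("Obama", "Hawaii", kb),             # -> "lived_in"
    pattern_supervision(sentence, "Obama", "Hawaii"),  # -> "born_in"
]
print(annotations)  # ['lived_in', 'born_in']: the two sources conflict
```

Resolving such conflicts, i.e., inferring which source to trust in which context, is the part REHESSION addresses with representation learning; the sketch only produces the conflicting inputs.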

Cited by 61 publications (47 citation statements)
References 26 publications

“…We use the same dataset 1 published by (Ren et al 2017). The training data are automatically labeled using distant supervision, while 395 sentences are annotated by the authors of (Hoffmann et al 2011).

                                        NYT                      KBP
Method                                  Prec   Rec    F1        Prec   Rec    F1
(Hoffmann et al 2011)                   0.338  0.327  0.333     0.301  0.530  0.380
DS-Joint (Li and Ji 2014)               0.574  0.256  0.354     -      -      -
Cotype (Ren et al 2017)                 0.423  0.511  0.463     0.311  0.537  0.388
ReHession (Liu et al 2017)              0.412  0.573  0.480     0.367  0.493  0.421
LSTM-CRF (Zheng et al 2017b)            0.693  0.310  0.428     0.596  0.256  0.358
LSTM-LSTM-Bias (Zheng et al 2017b)      (scores truncated in the excerpt)

… (Ellis et al 2012). We use the public training data 2, which are automatically labeled using distant supervision and handcrafted patterns by the authors of (Liu et al 2017).…”
Section: Experiments: Experiment Settings (mentioning)
confidence: 99%
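
As a quick sanity check on the reconstructed table, F1 is the harmonic mean of precision and recall, so a two-line computation reproduces, for example, ReHession's reported 0.480 from its precision 0.412 and recall 0.573:

```python
# F1 is the harmonic mean of precision and recall: F1 = 2PR / (P + R).
def f1(p, r):
    return 2 * p * r / (p + r)

print(round(f1(0.412, 0.573), 3))  # 0.479, i.e., the reported 0.480 up to rounding
```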
“…(3) Cotype (Ren et al 2017) jointly learns the representations of entity mentions, relation mentions, and type labels; (4) ReHession (Liu et al 2017) employs heterogeneous supervision from both a knowledge base and heuristic patterns; (5) LSTM-CRF and LSTM-LSTM-Bias (Zheng et al 2017b), the work most closely related to our method, convert the joint extraction task into a sequence labeling problem based on a novel tagging scheme.…”
Section: Experiments: Experiment Settings (mentioning)
confidence: 99%
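
Because the quoted passage turns on that tagging scheme, here is a minimal sketch of the idea as described in (Zheng et al 2017b): each token tag combines an entity-boundary position (B/I/E/S), a relation type, and the entity's argument role (1 or 2). The sentence, triple, and the "CP" (Country-President) abbreviation below are illustrative.

```python
# Sketch of the tagging scheme that casts joint entity/relation
# extraction as sequence labeling: a token's tag packs a boundary
# position (B/I/E/S), a relation type, and an argument role (1 or 2);
# tokens outside any gold triple get "O".

sentence = ["Trump", "is", "president", "of", "the", "United", "States"]
tags     = ["S-CP-2", "O",  "O",        "O",  "O",   "B-CP-1", "E-CP-1"]

# Decoding groups tagged spans by relation type and pairs role 1 with
# role 2, recovering the triple (United States, Country-President, Trump).
for token, tag in zip(sentence, tags):
    print(f"{token:<8}{tag}")
```

A sequence model is then trained to predict these tags directly, so entities and their relations are extracted in one pass rather than by a pipeline of separate extractors.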
“…It is essentially the task of finding semantic relationships between pairs of entities. Relation extraction is essential for many well-known tasks such as knowledge base completion, question answering, and ontology construction, as well as applications in medical science [1]. A vast amount of unstructured electronic text data is available on the web, such as newspapers, articles, journals, blogs, and government and private documents.…”
Section: Introduction (mentioning)
confidence: 99%
“…Aiming to detect and classify the relation between an entity pair in a given sentence, Relation Extraction (RE) plays a vital role in natural language understanding (Etzioni et al 2004; Mintz et al 2009; Liu et al 2017a). Typical methods follow the supervised learning paradigm and require extensive human annotations, which are costly and time-consuming.…”
Section: Introduction (mentioning)
confidence: 99%