Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1021

Open Relation Extraction: Relational Knowledge Transfer from Supervised Data to Unsupervised Data

Abstract: Open relation extraction (OpenRE) aims to extract relational facts from the open-domain corpus. To this end, it discovers relation patterns between named entities and then clusters those semantically equivalent patterns into a united relation cluster. Most OpenRE methods typically confine themselves to unsupervised paradigms, without taking advantage of existing relational facts in knowledge bases (KBs) and their high-quality labeled instances. To address this issue, we propose Relational Siamese Networks (RSN…
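The abstract only names the approach, so the sketch below illustrates the general idea of Siamese similarity scoring over relation instances: a shared encoder embeds two instances, and a classifier over their element-wise distance decides whether they express the same relation. This is a minimal sketch, not the authors' exact RSN; the stand-in MLP encoder, the dimensions, and the |h1 - h2| scoring head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SiameseRelationNet(nn.Module):
    """Scores whether two relation instances express the same relation."""
    def __init__(self, encoder: nn.Module, hidden_dim: int):
        super().__init__()
        self.encoder = encoder                  # shared weights for both inputs
        self.scorer = nn.Linear(hidden_dim, 1)  # similarity logit from |h1 - h2|

    def forward(self, x1, x2):
        h1, h2 = self.encoder(x1), self.encoder(x2)   # (batch, hidden_dim) each
        # element-wise distance feeds a binary "same relation?" classifier
        return torch.sigmoid(self.scorer(torch.abs(h1 - h2))).squeeze(-1)

# Toy usage with a stand-in MLP encoder over pre-extracted instance features.
encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU())
model = SiameseRelationNet(encoder, hidden_dim=128)
x1, x2 = torch.randn(8, 64), torch.randn(8, 64)
same_prob = model(x1, x2)                       # (8,) probabilities in [0, 1]
loss = nn.functional.binary_cross_entropy(same_prob, torch.ones(8))
```

A metric learned this way on labeled KB relations can then score instance pairs in unlabeled text, which is what lets clustering transfer relational knowledge to novel relations.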

Cited by 43 publications (37 citation statements)
References 16 publications
“…There have been many efforts for open relation learning, including pattern extraction (Banko et al., 2007; Fader et al., 2011; Mausam et al., 2012; Del Corro and Gemulla, 2013; Angeli et al., 2015; Petroni et al., 2015; Stanovsky and Dagan, 2016; Mausam, 2016; Cui et al., 2018), relation discovery (Yao et al., 2011; Marcheggiani and Titov, 2016), relation clustering (Shinyama and Sekine, 2006; Elsahar et al., 2017; Wu et al., 2019), and data collection (Riloff et al., 1999; Etzioni et al., 2005; Pantel and Pennacchiotti, 2006; Rozenfeld and Feldman, 2008; Nakashole et al., 2011; Zhu et al., 2009; Gao et al., 2020). However, for continual relation learning, there are still only some preliminary explorations.…”
Section: Related Work (mentioning)
confidence: 99%
“…CNN: Following [10], we take the CNN encoder as our first choice, which includes an embedding layer followed by a one-dimensional convolutional layer and a max-pooling layer. The model settings we use are the same as in [10].…”
Section: Neural Encoders (mentioning)
confidence: 99%
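For reference, a minimal PyTorch sketch of such an embedding → 1-D convolution → max-pooling encoder follows; the vocabulary size, embedding width, and filter settings are placeholder assumptions, not the settings of [10].

```python
import torch
import torch.nn as nn

class CNNEncoder(nn.Module):
    """Embedding -> 1-D convolution -> max-over-time pooling."""
    def __init__(self, vocab_size=20000, emb_dim=50, hidden_dim=230, kernel=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, hidden_dim, kernel, padding=1)

    def forward(self, token_ids):
        x = self.embed(token_ids)                # (batch, seq_len, emb_dim)
        x = self.conv(x.transpose(1, 2))         # (batch, hidden_dim, seq_len)
        return torch.relu(x).max(dim=2).values   # fixed-size sentence vector

tokens = torch.randint(0, 20000, (4, 40))        # a batch of token-id sequences
vectors = CNNEncoder()(tokens)                   # (4, 230) sentence encodings
```

Max-over-time pooling is what makes the output size independent of sentence length, so the encoder can feed a fixed-width similarity head.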
“…Different from most other datasets, the entity pair of each instance in FewRel is unique, which prevents the model from taking shortcuts by memorizing the entities. Following [10], we choose 64 relations as the train set and randomly select 16 relations with 1600 instances as the test set; the remaining sentences form the validation set. The other dataset is NYT+FB [19][20], which is built by distant supervision.…”
Section: Experiments Setting (mentioning)
confidence: 99%
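The relation-level split this excerpt describes (64 training relations, 16 test relations capped at 1600 instances, the rest for validation) can be sketched as below. The `instances` format and sampling details are assumptions for illustration, and the sketch reads "remaining sentences" as the sentences of the held-out relations.

```python
import random

def split_by_relation(instances, n_train=64, n_test=16, n_test_inst=1600, seed=0):
    """instances: list of (sentence, relation) pairs.
    Splits at the relation level so train/test relations never overlap."""
    rng = random.Random(seed)
    relations = sorted({rel for _, rel in instances})
    rng.shuffle(relations)
    train_rels = set(relations[:n_train])
    test_rels = set(relations[n_train:n_train + n_test])
    train = [p for p in instances if p[1] in train_rels]
    test = rng.sample([p for p in instances if p[1] in test_rels], n_test_inst)
    # sentences outside the train/test relations serve as validation
    val = [p for p in instances if p[1] not in train_rels | test_rels]
    return train, val, test
```

Splitting by relation rather than by sentence is the point: test relations are entirely unseen in training, which is what makes the task open relation extraction rather than ordinary classification.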
“…This method naturally makes good use of context, dynamically produces word-vector representations, and is trained with denoising objectives such as masked language modeling (MLM) on large-scale corpora. The resulting representations are very helpful for downstream tasks and resolve the polysemy problem [10] [11].…”
Section: Introduction (mentioning)
confidence: 99%
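As a small demonstration of the polysemy point, the snippet below extracts contextual vectors for the same surface word in two contexts using the HuggingFace transformers API; the choice of bert-base-uncased is an assumption, since the excerpt names no specific model.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed model choice
model = AutoModel.from_pretrained("bert-base-uncased").eval()

# The same word receives different vectors in different contexts.
sentences = ["He sat on the river bank.", "She opened an account at the bank."]
bank_id = tok.convert_tokens_to_ids("bank")
with torch.no_grad():
    for s in sentences:
        enc = tok(s, return_tensors="pt")
        hidden = model(**enc).last_hidden_state[0]   # (seq_len, 768)
        idx = enc.input_ids[0].tolist().index(bank_id)
        print(s, "->", hidden[idx][:4])              # context-dependent vector
```

Because the vector for "bank" differs between the two sentences, downstream relation models get sense-disambiguated inputs for free, unlike with static word embeddings.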