Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2014
DOI: 10.3115/v1/p14-1076

Robust Domain Adaptation for Relation Extraction via Clustering Consistency

Abstract: We propose a two-phase framework to adapt existing relation extraction classifiers to extract relations for new target domains. We address two challenges: negative transfer, when knowledge from source domains is used without considering differences in relation distributions; and a lack of adequate labeled samples for rarer relations in the new domain, due to a small labeled data set and imbalanced relation distributions. Our framework leverages both labeled and unlabeled data in the target domain. First, we d…

Cited by 12 publications (2 citation statements)
References 18 publications (13 reference statements)
“…The unsupervised approach mainly treats the relation extraction task as a clustering problem (Das et al., 2019; Min et al., 2012; Nguyen et al., 2014). For clustering, a large number of entity pairs is first selected, and similarities are measured based on the intermediate context between the entities.…”
Section: Literature Review
confidence: 99%
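The clustering approach described in this statement can be sketched minimally: represent each entity pair by a bag-of-words vector over its intermediate context, then group pairs whose context vectors are sufficiently similar. This is an illustrative sketch, not the cited papers' implementation; the greedy threshold clustering, the `0.6` similarity cutoff, and all function names are assumptions for illustration.

```python
from collections import Counter
from math import sqrt

def context_vector(context):
    # Bag-of-words vector for the text between the two entities.
    return Counter(context.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_pairs(pairs, threshold=0.6):
    # pairs: list of (entity1, entity2, intermediate_context) tuples.
    # Greedy single-pass clustering: a pair joins the first cluster
    # containing a sufficiently similar context, else starts a new one.
    clusters = []  # list of (member_indices, member_vectors)
    for i, (_, _, ctx) in enumerate(pairs):
        vec = context_vector(ctx)
        for members, vecs in clusters:
            if max(cosine(vec, v) for v in vecs) >= threshold:
                members.append(i)
                vecs.append(vec)
                break
        else:
            clusters.append(([i], [vec]))
    return [members for members, _ in clusters]
```

For example, two "capital of" pairs would land in one cluster while an unrelated context starts its own, since only incidental function words overlap.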
“…Multiple labels from a single corpus: for the ChemProt corpus, we treat different labels as different sources, following Nguyen et al. (2014). The five positive labels of ChemProt are CPR:3, CPR:4, CPR:5, CPR:6, and CPR:9, which stand for upregulator, downregulator, agonist, antagonist, and substrate, respectively. We predict classification performance for the unlabeled targets CPR:6 and CPR:9, taking multi-source labeled input (denoted 3C) from three sources, CPR:3, CPR:4, and CPR:5, as positive instances and the remaining as negative.…”
Section: 2 Multi-Source Single Target (MSST)
confidence: 99%
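The multi-source single-target split described in this statement can be sketched as a simple partition of labeled instances: the three source labels become positive training data, the two target labels are withheld as unlabeled prediction targets, and everything else is negative. This is a hypothetical helper under those assumptions, not the citing paper's code; the function and variable names are invented for illustration.

```python
# Hypothetical MSST partition for ChemProt-style labeled instances.
SOURCE_LABELS = {"CPR:3", "CPR:4", "CPR:5"}   # labeled positive sources ("3C")
TARGET_LABELS = {"CPR:6", "CPR:9"}            # unlabeled targets to predict

def msst_split(instances):
    # instances: list of (text, label) pairs.
    sources, targets, negatives = [], [], []
    for text, label in instances:
        if label in SOURCE_LABELS:
            sources.append((text, 1))    # positive training instance
        elif label in TARGET_LABELS:
            targets.append(text)         # label withheld at training time
        else:
            negatives.append((text, 0))  # remaining labels treated as negative
    return sources, targets, negatives
```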