2021 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn52387.2021.9534434
Exploit a Multi-head Reference Graph for Semi-supervised Relation Extraction

Cited by 12 publications (7 citation statements)
References 26 publications
“…For performance comparison, we select Self-Training [15], Mean-Teacher [25], DualRE [26], RE-Ensemble [26], MRefG [16], MetaSRE [7], GradLRE [8], SelfLRE [27] and UG-MCT [28] as our baselines. The upper bound for our models is the pre-trained model BERT [22] (denoted as BERT w gold labels).…”
Section: Baselines
confidence: 99%
“…The Distant Supervision method [5,6] exploits external knowledge bases (KBs) to annotate unlabeled data and uses them as training data. The Self-Training method [7,8,15,16] assigns pseudo labels and uses them to incrementally improve the model's generalization. In spite of their good results, the existing models ignore the semantics of labels and the information shared between sentences.…”
Section: Low Resource Relation Extraction
confidence: 99%
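
As context for the self-training approach mentioned in the statement above, a minimal, generic pseudo-labeling loop might look as follows. This is an illustrative sketch over pre-computed sentence features with a scikit-learn classifier as a stand-in relation model; the function name, threshold, and round count are assumptions, not the procedure of any cited paper.

    # Generic self-training sketch: iteratively assign pseudo labels to
    # confident unlabeled examples and retrain on the enlarged labeled set.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def self_train(X_lab, y_lab, X_unlab, threshold=0.9, rounds=5):
        clf = LogisticRegression(max_iter=1000)
        for _ in range(rounds):
            clf.fit(X_lab, y_lab)                      # fit on current labeled set
            if len(X_unlab) == 0:
                break
            proba = clf.predict_proba(X_unlab)         # class probabilities per example
            conf = proba.max(axis=1)
            pseudo = proba.argmax(axis=1)
            keep = conf >= threshold                   # accept only confident pseudo labels
            X_lab = np.vstack([X_lab, X_unlab[keep]])
            y_lab = np.concatenate([y_lab, clf.classes_[pseudo[keep]]])
            X_unlab = X_unlab[~keep]                   # the rest stays unlabeled
        return clf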
“…We compare REMix with three state-of-the-art models that are representative of the existing class of methods for SSRE: MRefG (Li et al, 2021), MetaSRE (Hu et al, 2021a), and GradLRE (Hu et al, 2021b). MRefG leverages the unlabelled data by semantically or lexically connecting them to labelled data by constructing reference graphs, such as entity reference or verb reference.…”
Section: Baselines and Implementation Details
confidence: 99%
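
The entity-reference idea summarized in that statement can be sketched roughly as follows. The data format and function names are assumptions made for illustration; this links an unlabeled sentence to labeled sentences that mention the same entity string, and is not the MRefG implementation itself.

    # Illustrative "entity reference" graph: connect unlabeled sentences to
    # labeled sentences that share an entity mention.
    from collections import defaultdict

    def build_entity_reference_graph(labeled, unlabeled):
        """labeled/unlabeled: lists of dicts like {"id": ..., "entities": [...]}."""
        entity_index = defaultdict(set)
        for sent in labeled:
            for ent in sent["entities"]:
                entity_index[ent.lower()].add(sent["id"])

        edges = []  # (unlabeled_id, labeled_id) pairs that share an entity
        for sent in unlabeled:
            for ent in sent["entities"]:
                for lab_id in entity_index.get(ent.lower(), ()):
                    edges.append((sent["id"], lab_id))
        return edges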
“…Following (Rosenberg, Hebert, and Schneiderman 2005; Tarvainen and Valpola 2017; Lin et al 2019b; Li et al 2021; Hu et al 2021a,b), we adopt F1 score as the evaluation metric and precision and recall as auxiliary metrics. For data settings, we follow (Lin et al 2019b; Hu et al 2021a,b) to divide the training set into labeled and unlabeled sets.…”
Section: Experimental Settings
confidence: 99%
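
A rough sketch of this evaluation and data setting, assuming a micro-averaged F1 over predicted relation labels and an arbitrary labeled/unlabeled split ratio (both are placeholders, not values taken from the cited papers):

    # Carve a labeled subset out of the training set, treat the rest as
    # unlabeled, and report F1 with precision/recall as auxiliary metrics.
    import random
    from sklearn.metrics import precision_recall_fscore_support

    def split_train(train_set, labeled_ratio=0.1, seed=42):
        data = list(train_set)
        random.Random(seed).shuffle(data)
        cut = int(len(data) * labeled_ratio)
        return data[:cut], data[cut:]          # (labeled set, unlabeled set)

    def evaluate(y_true, y_pred):
        p, r, f1, _ = precision_recall_fscore_support(
            y_true, y_pred, average="micro")
        return {"precision": p, "recall": r, "f1": f1}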