Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2021.emnlp-main.204

Exploring Task Difficulty for Few-Shot Relation Extraction

Abstract: Few-shot relation extraction (FSRE) focuses on recognizing novel relations by learning with merely a handful of annotated instances. Meta-learning, which trains on randomly generated few-shot tasks to learn generic data representations, has been widely adopted for this task. Despite impressive results achieved, existing models still perform suboptimally when handling hard FSRE tasks, where the relations are fine-grained and similar to each other. We argue this is largely because existing models do not distinguish…
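The meta-learning setup described in the abstract trains on randomly generated N-way K-shot tasks. As a point of reference, below is a minimal sketch of how such an episode is typically sampled; the dataset layout (a dict mapping relation labels to instance lists) and all names are illustrative assumptions, not taken from the paper.

```python
import random

def sample_episode(data, n_way=5, k_shot=1, n_query=5):
    """Sample one N-way K-shot few-shot relation extraction task.

    `data` maps each relation label to its list of annotated
    instances (an assumed layout, for illustration only).
    """
    relations = random.sample(sorted(data), n_way)  # pick N novel relations
    support, query = [], []
    for class_id, rel in enumerate(relations):
        picked = random.sample(data[rel], k_shot + n_query)
        support += [(x, class_id) for x in picked[:k_shot]]
        query += [(x, class_id) for x in picked[k_shot:]]
    return support, query
```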



Cited by 38 publications (46 citation statements)
References 25 publications
“…The comparable models include two CNN-based models, Proto-HATT (Gao et al., 2019a) and MLMAN (Ye and Ling, 2019), as well as nine BERT-based models: BERT-PAIR (Gao et al., 2019b), REGRAB (Qu et al., 2020), TD-proto (Yang et al., 2020), CTEG (Wang et al., 2020), ConceptFERE (Yang et al., 2021), MTB (Baldini Soares et al., 2019), CP (Peng et al., 2020), MapRE (Dong et al., 2021), and HCRP (Han et al., 2021). Since our proposed approach is based on the Prototype Network with BERT, we also compare against Proto-BERT without relation information.…”
Section: Comparable Models (mentioning)
confidence: 99%
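For context, the Proto-BERT baseline mentioned in this statement classifies a query by its distance to class prototypes, where each prototype is the mean of its support-instance encodings. Below is a minimal sketch, assuming the instances have already been encoded (e.g., as BERT [CLS] vectors); the function and variable names are our own, not the paper's.

```python
import torch

def proto_logits(support_emb, support_labels, query_emb, n_way):
    """Nearest-prototype classification over pre-encoded instances.

    support_emb:    [N*K, d] support encodings (e.g., BERT [CLS] vectors)
    support_labels: [N*K] class ids in [0, n_way)
    query_emb:      [Q, d] query encodings
    Returns [Q, n_way] logits = negative squared Euclidean distances.
    """
    prototypes = torch.stack(
        [support_emb[support_labels == c].mean(dim=0) for c in range(n_way)]
    )  # [n_way, d]: each prototype is the mean of its support instances
    return -torch.cdist(query_emb, prototypes).pow(2)
```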
“…MapRE (Dong et al., 2021) proposed a framework that considers both label-agnostic and label-aware semantic mapping information for low-resource relation extraction in both pre-training and fine-tuning. HCRP (Han et al., 2021) introduced three modules to improve the model: hybrid prototype learning, relation-prototype contrastive learning, and a task-adaptive focal loss. However, these methods introduce relation information only implicitly, which may add parameters and limits their ability to handle outlier samples.…”
Section: Related Work (mentioning)
confidence: 99%
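Of the three HCRP modules named above, the task-adaptive focal loss is the easiest to sketch. The snippet below shows the standard focal loss (Lin et al., 2017) with a scalar per-task weight standing in for HCRP's difficulty-based adaptation; the exact adaptive weighting is defined in the HCRP paper, so treat `task_weight` here as an assumed placeholder rather than the paper's formulation.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, task_weight=1.0):
    """Focal loss with an optional per-task weight.

    Down-weights easy examples via the (1 - p_t)^gamma factor.
    `task_weight` is an assumed stand-in for HCRP's difficulty-based
    adaptation, not its exact definition.
    """
    log_p = F.log_softmax(logits, dim=-1)                      # [B, C]
    log_pt = log_p.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # [B]
    pt = log_pt.exp()                                          # prob of true class
    loss = -((1.0 - pt) ** gamma) * log_pt
    return task_weight * loss.mean()
```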
“…MapRE (Dong et al., 2021) proposed a framework that considers both label-agnostic and label-aware semantic mapping information in pre-training and fine-tuning. HCRP (Han et al., 2021) introduced three modules to improve the model: hybrid prototype learning, relation-prototype contrastive learning, and a task-adaptive focal loss. However, most existing methods obtain the prototype representation only from the instances given for each relation class (generally by averaging these instances).…”
Section: Introduction (mentioning)
confidence: 99%
“…CTEG (Wang et al., 2020) proposed a model that learns to decouple high-co-occurrence relations, where two types of external information are added. Another intuitive idea is for the model to learn good prototypes or representations, that is, to reduce intra-class distances while widening inter-class ones (Han et al., 2021; Dong et al., 2021). However, there are two limitations in how these works introduce relation information. The first is that most of them impose implicit constraints, such as contrastive learning or relation graphs, instead of direct fusion, which can be weak when facing remote samples.…”
Section: Introduction (mentioning)
confidence: 99%
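The "reduce intra-class distances while widening inter-class ones" objective mentioned in the statement above is commonly realized with an InfoNCE-style contrastive loss between instance encodings and relation prototypes. The following is a generic sketch of that idea, not HCRP's exact objective; the temperature `tau` and all names are assumptions.

```python
import torch
import torch.nn.functional as F

def proto_contrastive_loss(inst_emb, proto_emb, labels, tau=0.1):
    """InfoNCE-style contrastive loss between instances and prototypes.

    Pulls each instance toward its own relation prototype and pushes
    it away from the others, shrinking intra-class distances while
    widening inter-class ones. A generic sketch of the idea only.

    inst_emb:  [B, d] instance encodings
    proto_emb: [N, d] relation prototypes
    labels:    [B] prototype index of each instance
    """
    inst = F.normalize(inst_emb, dim=-1)
    proto = F.normalize(proto_emb, dim=-1)
    logits = inst @ proto.t() / tau  # [B, N] scaled cosine similarities
    return F.cross_entropy(logits, labels)
```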