Findings of the Association for Computational Linguistics: NAACL 2022
DOI: 10.18653/v1/2022.findings-naacl.139

Learn from Relation Information: Towards Prototype Representation Rectification for Few-Shot Relation Extraction

Abstract: Few-shot Relation Extraction refers to fast adaptation to novel relation classes with few samples through training on the known relation classes. Most existing methods focus on implicitly introducing relation information (i.e., relation label or relation description) to constrain the prototype representation learning, such as contrastive learning, graphs, and specifically designed attentions, which may bring useless and even harmful parameters. Besides, these approaches are limited in handling outlier samples f…

Cited by 7 publications (4 citation statements) · References 15 publications

Citation statements (ordered by relevance):
“…[28] employs instruction fine-tuning to comprehend and process task-specific instructions, including both main and auxiliary tasks. The PRM approach relies on human-generated descriptions that necessitate expert input and effort, proving less adaptable to new classes [29]. [30] extracts entity type-related features based on mutual information criteria and generates a unique prompt for each unseen example by selecting relevant entity type-related features.…”
Section: Related Work
confidence: 99%
“…Prototype networks are the dominant direction for FSRE tasks due to their efficiency. The following are some of the more representative prototype-network models in FSRE: Proto-HATT [17] improves model robustness and accelerates convergence by designing instance-level and feature-level attention schemes; MLMAN [18] encodes the instances in the query set and support set and leverages the interaction of local and instance-level information between them to obtain richer semantic representations; REGRAB [44] incorporates global relation graphs with Bayesian meta-learning into the model; TD-proto [45] introduces relation and entity descriptions to enhance prototype networks; ConceptFERE [23] incorporates an entity multi-concept selection module to enhance key entity concept features; CTEG [46] fine-tunes the model by assigning pseudo-labels to untagged data in the domain; HCRP [20] introduces three modules, for hybrid prototype learning, contrastive learning, and adaptive focal loss, to improve the model; SimpleFSRE [21] simplifies the model by directly incorporating relation information into the prototype representation; PRM [22] proposes a parameter-free prototype correction method; and MapRE [47] adds a framework that takes label uncertainty and label-aware semantic mapping information into account. However, these models have inferior domain-adaptation capabilities.…”
Section: Few-shot Relation Extraction
confidence: 99%
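For context on the prototype-network formulation these models share, here is a minimal sketch of N-way K-shot nearest-prototype classification, assuming sentence embeddings have already been produced by some encoder; the function names and the toy episode are illustrative and do not come from any of the cited papers.

    import numpy as np

    def prototypes(support):
        # support: (N, K, d) embeddings of K support examples per relation.
        # The class prototype is simply the per-class mean embedding.
        return support.mean(axis=1)

    def classify(query, protos):
        # query: (Q, d); protos: (N, d). Assign each query to the class
        # whose prototype is nearest in squared Euclidean distance.
        d2 = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
        return d2.argmin(axis=1)  # (Q,) predicted class indices

    # Toy 3-way 5-shot episode with 16-dimensional embeddings
    rng = np.random.default_rng(0)
    support = rng.normal(size=(3, 5, 16))
    query = rng.normal(size=(4, 16))
    print(classify(query, prototypes(support)))

Everything the quoted models add (attention schemes, relation graphs, descriptions, rectification) refines either the prototype computation or the distance used in this loop.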
“…All of [17][18][19] try to make full use of the valuable information in the dataset to achieve a more informative prototype network. Recent FSRE research efforts introduce external information such as relation information (i.e., relation descriptions) [20][21][22] and entity concepts [23], thus effectively improving the performance of FSRE tasks. However, two problems remain: (1) current work is overly dependent on external information and does not pay sufficient attention to the information already available in the dataset [24].…”
Section: Introduction
confidence: 99%
“…SimpleFSRE [18] improves model performance by concatenating two representations of relation information and directly incorporating them into the prototype representation. PRM [19] employs a gating mechanism to exploit relation description information, determining the degree to which both the prototype and the relation information are preserved and updated. CBPM [20] corrects the prototype network by utilizing category information from the query set and hierarchical information from relation synonyms.…”
Section: Introduction
confidence: 99%
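As a rough illustration of the two fusion styles described above, the sketch below contrasts directly incorporating a relation-description embedding into the prototype (SimpleFSRE-style) with a gated combination (PRM-style). The sigmoid gate is a generic stand-in; the papers' exact formulations may differ.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def direct_fusion(proto, rel):
        # Directly add the relation-description embedding to the prototype.
        return proto + rel

    def gated_fusion(proto, rel, w_p, w_r, b):
        # Hypothetical gate: g in (0, 1) decides, per dimension, how much
        # of the prototype vs. the relation embedding is preserved.
        g = sigmoid(proto @ w_p + rel @ w_r + b)  # (d,) element-wise gate
        return g * proto + (1.0 - g) * rel

    d = 16
    rng = np.random.default_rng(1)
    proto, rel = rng.normal(size=(2, d))
    w_p, w_r = rng.normal(size=(2, d, d)) * 0.1
    print(gated_fusion(proto, rel, w_p, w_r, np.zeros(d)).shape)  # (16,)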