2020
DOI: 10.48550/arxiv.2009.08445
Preprint

Self-Supervised Meta-Learning for Few-Shot Natural Language Classification Tasks

Cited by 5 publications (8 citation statements)
References 0 publications
“…The detailed results of the 12 NLI tasks, their standard errors, as well as the hyperparameter selections are all included in the Appendix. We also include the SOTA results from [Bansal et al., 2020] for comparison and note that PACMAML is consistently the best performer over all three few-shot settings k = 4, 8, 16. In comparison, MAML and BMAML perform worse, possibly due to sensitivity to learning rates, as suggested by [Bansal et al., 2019].…”
Section: Few-shot Classification Problems (mentioning)
confidence: 99%
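The learning-rate sensitivity this excerpt attributes to MAML and BMAML comes from MAML's nested optimization: an inner loop adapts to each task's support set, and an outer loop updates the shared initialization through those adaptation steps. Below is a minimal functional sketch in PyTorch, assuming a toy linear model and a generic task batch; all names and rates are illustrative assumptions, not code from the cited papers.

```python
import torch
import torch.nn.functional as F

def forward(params, x):
    # Toy linear classifier; params = [W, b]. Illustrative only.
    W, b = params
    return x @ W + b

def maml_meta_gradient(params, tasks, inner_lr=0.01):
    """One MAML meta-gradient over a batch of few-shot tasks.
    Each task is (support_x, support_y, query_x, query_y); k-shot
    settings like k = 4, 8, 16 control the support-set size."""
    meta_loss = 0.0
    for support_x, support_y, query_x, query_y in tasks:
        # Inner loop: one SGD step on the support set. create_graph=True
        # lets the outer gradient flow through this update, which is why
        # a poorly chosen inner_lr can destabilize meta-training.
        loss = F.cross_entropy(forward(params, support_x), support_y)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]
        # Outer objective: adapted parameters evaluated on the query set.
        meta_loss = meta_loss + F.cross_entropy(forward(adapted, query_x), query_y)
    # Gradient w.r.t. the shared initialization, averaged over tasks.
    return torch.autograd.grad(meta_loss / len(tasks), params)
```

Because the outer gradient is differentiated through the inner update, the inner and outer learning rates interact, which is one plausible reading of the instability the excerpt mentions.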
“…BERT or RoBERTa). However, we did not adopt these approaches for the following reasons: (1) We observed that the perturbed examples from such adversarial methods are often unnatural and not readable by humans. (2) Both adversarial perturbation and selection require a reference model, which violates our model-agnostic task formulation principle in Table 2.…”
Section: Sampling Of Training and Test Data (mentioning)
confidence: 99%
“…Another line of work explored semi-supervised learning, where unlabeled data, usually alongside small amounts of labeled data, is used for learning [35, 37, 15, 17]. Recent studies have also explored meta-learning in NLU, where models have access to data from many training tasks and their few-shot learning ability is evaluated on unseen test tasks [4, 1, 20]. In this work, we do not address the meta-learning setting [40].…”
Section: Introduction (mentioning)
confidence: 99%
“…Prototype-based methods have recently become popular few-shot learning approaches in the machine learning community. They were first studied in the context of image classification (Vinyals et al., 2016; Sung et al., 2018; Zhao et al., 2020) and have recently been adapted to different NLP tasks such as text classification (Geng et al., 2019; Bansal et al., 2020), machine translation (Gu et al., 2018), and relation classification (Han et al., 2018).…”
Section: Self-training (mentioning)
confidence: 99%
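The prototype-based methods this excerpt surveys classify a query by averaging the embedded support examples of each class into a prototype and picking the nearest one. A minimal sketch, assuming an arbitrary encoder model; the function and argument names are illustrative, not from the cited works.

```python
import torch

def proto_classify(encoder, support_x, support_y, query_x, n_classes):
    """Prototypical-network classification: average the encoded support
    examples of each class into a prototype, then label each query by
    its nearest prototype in embedding space (Euclidean distance)."""
    z_support = encoder(support_x)              # (n_support, d) embeddings
    z_query = encoder(query_x)                  # (n_query, d) embeddings
    prototypes = torch.stack([
        z_support[support_y == c].mean(dim=0)   # class centroid
        for c in range(n_classes)
    ])                                          # (n_classes, d)
    dists = torch.cdist(z_query, prototypes)    # (n_query, n_classes)
    return dists.argmin(dim=1)                  # predicted class labels
```

Because the only learned component is the encoder, the same scheme transfers across the tasks listed above (text classification, relation classification, and so on) by swapping in a task-appropriate encoder.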