2018
DOI: 10.48550/arxiv.1805.07722
Preprint

Task-Agnostic Meta-Learning for Few-shot Learning

Cited by 5 publications (7 citation statements)
References 0 publications
“…In Table 1 we also include the results of our own implementation of MAML, which reproduces all results except the 20-way 1-shot Omniglot case. Difficulty in replicating the specific result has also been noted before in Jamal et al (2018). We base our… [Table 1 caption: MAML++ Omniglot 20-way Few-Shot Results. Our reproduction of MAML appears to be replicating all the results except the 20-way 1-shot results.]…”
Section: Results
confidence: 61%
“…We base our… [Table 1 caption: MAML++ Omniglot 20-way Few-Shot Results. Our reproduction of MAML appears to be replicating all the results except the 20-way 1-shot results.] Other authors have come across this problem as well (Jamal et al, 2018). We report our own baselines to provide better relative intuition on how each method impacted the test accuracy of the model.…”
Section: Results
confidence: 99%
“…Furthermore, (Finn et al, 2017) was extended by introducing Bayesian mechanisms for fast adaptation and meta-update, quickly obtaining an approximate posterior of a given unseen task; a probabilistic framework was also developed. To avoid a biased meta-learner like (Finn et al, 2017), Jamal et al (2018) proposed a task-agnostic meta-learning (TAML) algorithm to train a meta-learner unbiased towards a variety of tasks before its initial model was adapted to unseen tasks. To deal with the long-tail distribution in big data, Wang et al (2017c) introduced a meta-network that learned to progressively transfer meta-knowledge from the head to the tail classes, where meta-knowledge was encoded with a meta-network trained to predict many-shot model parameters from few-shot model parameters.…”
Section: Approach 4: Meta Learning
confidence: 99%
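The TAML idea quoted above — regularizing the meta-learner so its *initial* model is unbiased across a task's classes — can be sketched with an entropy-based penalty. This is our own illustrative code, not the authors' implementation; the function name `taml_entropy_penalty` and the exact form of the term are assumptions in the spirit of the paper's entropy-reduction variant:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    """Shannon entropy of each row of a probability matrix."""
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def taml_entropy_penalty(logits_before, logits_after):
    """Entropy-based TAML-style regularizer (illustrative sketch).

    Encourages the initial meta-learner to be maximally uncertain
    (unbiased) over a task's classes, while rewarding the entropy
    reduction achieved by task adaptation. Added to the meta-loss,
    so confident (biased) initial predictions are penalized.
    """
    h_before = entropy(softmax(logits_before)).mean()
    h_after = entropy(softmax(logits_after)).mean()
    return h_after - h_before

# Unbiased initial logits (uniform) and confident adapted logits:
before = np.zeros((4, 3))                # uniform -> entropy log(3)
after = np.tile([10.0, 0.0, 0.0], (4, 1))  # peaked -> near-zero entropy
penalty = taml_entropy_penalty(before, after)  # strongly negative (good)
```

An initialization that already favored some class would have low `h_before`, making the penalty larger — which is exactly the bias toward particular tasks that TAML is designed to remove.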
“…A popular meta-learning strategy consists of finding model initialisations that allow fast adaptation to new, previously unseen tasks [17]. The strategy has since been widely adopted for classification tasks [29] and several recently proposed extensions report increases to efficiency [33] and performance [31,2]. While a natural separation of few-shot tasks exists for image classification problems, in contrast, problems framed as a regression (e.g.…”
Section: Related Work
confidence: 99%
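The initialization-based strategy described in the excerpt above (MAML-style bi-level optimization) can be sketched on a toy problem. This is a minimal illustration, not the cited implementation: the meta-gradient is estimated by finite differences for clarity, whereas a real implementation differentiates through the inner step, and the quadratic tasks are our own assumption:

```python
import numpy as np

def inner_adapt(theta, grad, alpha=0.1):
    """MAML inner loop: one gradient step on a task's support set."""
    return theta - alpha * grad(theta)

def maml_outer_step(theta, tasks, alpha=0.1, beta=0.01, eps=1e-4):
    """MAML outer loop sketch: move the initialization theta so that a
    single inner step performs well on every task. Each task is a
    (loss, grad) pair of callables on the parameter vector."""
    meta_grad = np.zeros_like(theta)
    for loss, grad in tasks:
        def adapted_loss(th):
            # query-set loss evaluated *after* inner adaptation
            return loss(inner_adapt(th, grad, alpha))
        # central finite-difference estimate of the meta-gradient
        for i in range(theta.size):
            e = np.zeros_like(theta)
            e[i] = eps
            meta_grad[i] += (adapted_loss(theta + e)
                             - adapted_loss(theta - e)) / (2 * eps)
    return theta - beta * meta_grad / len(tasks)

# Two toy 1-D tasks with quadratic losses centred at 0 and 2: the
# initialization that adapts fastest to both is their midpoint, 1.
tasks = [
    (lambda th, c=c: float(((th - c) ** 2).sum()),
     lambda th, c=c: 2.0 * (th - c))
    for c in (0.0, 2.0)
]
theta = np.array([5.0])
for _ in range(500):
    theta = maml_outer_step(theta, tasks)
# theta is now close to 1.0, the fast-adaptation initialization
```

Note that neither task's own minimum (0 or 2) is the answer: the outer loop optimizes post-adaptation performance averaged over tasks, which is what distinguishes this from ordinary multi-task training.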