2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00049

Meta-Transfer Learning for Few-Shot Learning

Abstract: Meta-learning has been proposed as a framework to address the challenging few-shot learning setting. The key idea is to leverage a large number of similar few-shot tasks in order to learn how to adapt a base-learner to a new task for which only a few labeled samples are available. As deep neural networks (DNNs) tend to overfit when trained on only a few samples, meta-learning typically uses shallow neural networks (SNNs), thus limiting its effectiveness. In this paper we propose a novel few-shot learning method called meta-transfer learning (MTL)…
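The episodic setup the abstract describes (a large number of small tasks, each with only a few labeled samples) can be made concrete with a task sampler. Below is a minimal sketch in Python of N-way K-shot episode construction; the names (`dataset`, `sample_episode`) and split sizes are illustrative assumptions, not code from the paper.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, q_queries=15):
    """Sample one N-way K-shot episode from a labeled dataset.

    `dataset` is assumed to be an iterable of (image, label) pairs;
    each sampled class must contain at least k_shot + q_queries images.
    """
    by_class = defaultdict(list)
    for image, label in dataset:
        by_class[label].append(image)

    # Pick N classes, then split each class into support and query images.
    classes = random.sample(list(by_class.keys()), n_way)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        images = random.sample(by_class[cls], k_shot + q_queries)
        support += [(img, episode_label) for img in images[:k_shot]]
        query += [(img, episode_label) for img in images[k_shot:]]
    # The base-learner adapts on `support` and is evaluated on `query`.
    return support, query
```

Meta-learning then repeats this sampling over many episodes so that adaptation learned on one task transfers to unseen tasks.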

Citations: Cited by 981 publications (549 citation statements)
References: 33 publications
“…Modulo empirical fluctuations, our method performs at the state of the art and in some cases exceeds it. We wish to point out that SNAIL [9], TADAM [10,18], LEO [14], MTLF [18] pre-train the network for a 64-way classification task on miniImagenet and a 351-way classification task on tieredImagenet. However, all the models trained for our method are trained from scratch and use no form of pre-training.…”
Section: Comparison to the State-of-the-Art (mentioning)
confidence: 99%
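The pre-training stage this statement refers to, standard 64-way classification over miniImageNet's training classes before any episodic meta-learning, might look like the sketch below. The encoder, feature dimension, and hyperparameters are assumptions for illustration, not the cited papers' settings.

```python
import torch
import torch.nn as nn

def pretrain(encoder: nn.Module, loader, feat_dim=640, n_classes=64, epochs=90):
    """Whole-classification pre-training: a linear head over all 64
    miniImageNet training classes, discarded after pre-training."""
    head = nn.Linear(feat_dim, n_classes)
    opt = torch.optim.SGD(
        list(encoder.parameters()) + list(head.parameters()),
        lr=0.1, momentum=0.9,
    )
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:  # loader yields (images, labels) batches
            loss = loss_fn(head(encoder(images)), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return encoder  # the head is dropped; only the encoder feeds episodes
```

The contrast the statement draws is that their own models skip this stage entirely and train from scratch on episodes alone.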
“…Few-shot learning methods can be roughly categorized into two classes: data augmentation and task-based meta-learning. For example, in [15] the proposed model gave state-of-the-art results and paved the way for more sophisticated meta-transfer learning methods.…”
Section: Discussion / 4.1 Current Challenges (mentioning)
confidence: 96%
“…Adaptive batch normalization [16] performs domain adaptation for the segmentation task by replacing the statistics of the source domain with those of the target domain, achieving accuracy competitive with other deep-learning-based methods. Meta-transfer learning [31] performs few-shot learning by updating only the scale and shift parameters, and achieves better performance than fine-tuning all kernel parameters. These methods show that the scaling and shifting operation is effective for transferring knowledge to feature-extractor models.…”
Section: Transfer Learning with Scale and Shift (mentioning)
confidence: 99%
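A minimal sketch of the scale-and-shift idea this statement describes: freeze the pretrained convolution kernels and learn only a per-channel scale and shift during meta-training. The class name and wiring are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ScaleShiftConv2d(nn.Module):
    """Wrap a frozen pretrained conv layer with learnable per-channel
    scale (gamma) and shift (beta); only gamma and beta are trained."""

    def __init__(self, conv: nn.Conv2d):
        super().__init__()
        self.conv = conv
        for p in self.conv.parameters():
            p.requires_grad = False  # pretrained kernels stay fixed
        c = conv.out_channels
        # Shapes (C, 1, 1) broadcast over the (N, C, H, W) conv output.
        self.gamma = nn.Parameter(torch.ones(c, 1, 1))   # scale
        self.beta = nn.Parameter(torch.zeros(c, 1, 1))   # shift

    def forward(self, x):
        return self.gamma * self.conv(x) + self.beta
```

Passing only `gamma` and `beta` to the optimizer keeps the adapted parameter count at two per channel, which is why the statement reports better few-shot performance than fine-tuning all kernel parameters.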