2020 · Preprint
DOI: 10.48550/arxiv.2006.03806
Leveraging the Feature Distribution in Transfer-based Few-Shot Learning

Abstract: Few-shot classification is a challenging problem due to the uncertainty caused by using few labelled samples. In the past few years, transfer-based methods have proved to achieve the best performance, thanks to well-thought-out backbone architectures combined with efficient postprocessing steps. Following this vein, in this paper we propose a novel transfer-based method that builds on two steps: 1) preprocessing the feature vectors so that they become closer to Gaussian-like distributions, and 2) leveraging th…
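Step 1 of the abstract corresponds to a simple Gaussianizing preprocessing of the extracted features. As a concrete illustration, here is a minimal sketch of such a power transform, assuming non-negative backbone features (e.g., post-ReLU activations); the exponent `beta` and offset `eps` are illustrative choices, not necessarily the paper's exact values:

```python
import numpy as np

def power_transform(features: np.ndarray, beta: float = 0.5, eps: float = 1e-6) -> np.ndarray:
    """Push non-negative feature coordinates toward a more Gaussian-like
    distribution via an element-wise power transform, then L2-normalize.
    beta and eps are assumptions for illustration."""
    f = np.power(features + eps, beta)                     # shrink the heavy right tail
    return f / np.linalg.norm(f, axis=-1, keepdims=True)   # project rows to the unit sphere
```

Raising each coordinate to a power below one compresses the heavy right tail that post-ReLU features typically exhibit, making each marginal distribution more symmetric and Gaussian-like before any distance-based postprocessing.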

Cited by 18 publications (52 citation statements) · References 29 publications
“…Adopting few-shot image classification as our benchmark [9], we train a neural vision model using a set of background images $B_I = \{(I_i, y_i)\}_{i=1}^{m}$ (base set) from $K$ classes with $y_i \in C_B = \{c_1, c_2, \ldots$…”
Section: Proposed Method: Vioce (mentioning)
confidence: 99%
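Written out as a display equation, the base-set definition quoted above reads (the class list is truncated in the excerpt; completing it to $c_K$ follows from the stated $K$ classes):

```latex
B_I = \{(I_i, y_i)\}_{i=1}^{m}, \qquad y_i \in C_B = \{c_1, c_2, \ldots, c_K\}
```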
“…Pseudo-labeling based methods and graph-based methods are two main lines of effort. Pseudo-labeling based methods consist of prototype refinement [13,18,19,26], self-training [17,39] and entropy minimization [2,7]. As for graph-based approaches [14,20,28,41], graph models are constructed to propagate information from labeled data to unlabeled data.…”
Section: Semi-supervised Few-shot Learning (mentioning)
confidence: 99%
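To make the graph-based line of work concrete, the sketch below implements classic label propagation on a kNN affinity graph. The cosine affinity and the hyperparameters `k_neighbors`, `alpha`, and `iters` are illustrative assumptions, not details taken from the cited papers:

```python
import numpy as np

def propagate_labels(features, labels_onehot, labeled_mask,
                     k_neighbors=10, alpha=0.8, iters=20):
    """Label propagation on a kNN graph: labeled rows keep their one-hot
    labels, unlabeled rows (all-zero rows in labels_onehot) receive scores
    diffused from their neighbors. All hyperparameters are illustrative."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    W = f @ f.T                                   # cosine similarities
    np.fill_diagonal(W, 0.0)                      # no self-loops
    W = np.clip(W, 0.0, None)                     # keep nonnegative affinities
    drop = np.argsort(W, axis=1)[:, :-k_neighbors]
    np.put_along_axis(W, drop, 0.0, axis=1)       # keep each node's k strongest edges
    W = np.maximum(W, W.T)                        # symmetrize
    d = W.sum(axis=1, keepdims=True) + 1e-12
    S = W / np.sqrt(d) / np.sqrt(d.T)             # D^{-1/2} W D^{-1/2}
    Y = labels_onehot.astype(float)
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1.0 - alpha) * Y   # diffuse, then re-inject labels
        F[labeled_mask] = Y[labeled_mask]         # clamp labeled points
    return F                                      # per-sample class scores
```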
“…Upon Meta-Inc-Baseline, we further generalize to S²I-FSL in Section 4.2 and Section 4.3. Since prototype refinement using pseudo labels has been proved effective in semi-supervised methods [13,18,19,26], an intuitive solution to extend Meta-Inc-Baseline to S²I-FSL is to refine the novel class weights $W_n$ with unlabeled data. Meanwhile, base class weights $W_b$ trained on abundant base class samples remain unchanged.…”
Section: Prototype Refinement With Fake Unlabeled Data (mentioning)
confidence: 99%
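A minimal sketch of the prototype-refinement idea in this statement: pseudo-label the unlabeled features with their nearest novel prototype, then re-estimate only the novel prototypes (base class weights are left untouched). The blending factor and iteration count are assumptions; the cited methods differ in their weighting and filtering schemes:

```python
import numpy as np

def refine_prototypes(novel_protos, unlabeled, iters=5, blend=0.5):
    """Iteratively refine novel-class prototypes with pseudo-labels.
    blend and iters are illustrative assumptions."""
    protos = novel_protos.astype(float).copy()
    for _ in range(iters):
        u = unlabeled / np.linalg.norm(unlabeled, axis=1, keepdims=True)
        p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
        pseudo = (u @ p.T).argmax(axis=1)         # nearest-prototype pseudo-labels
        for c in range(len(protos)):
            assigned = unlabeled[pseudo == c]
            if len(assigned):
                # move the prototype toward the mean of its pseudo-labeled features
                protos[c] = blend * protos[c] + (1.0 - blend) * assigned.mean(axis=0)
    return protos
```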
“…Bateni et al.'s proposed Simple CNAPS [14] follows a similar metric-based clustering; however, a Mahalanobis distance is used for comparison between points, rather than propagation of labels. PT+MAP [17] and LaplacianShot [18] function similarly; however, both propose alternative strategies for distance metrics when considering query and support points. AmdimNet [19] and S2M2 [20], alternatively, leverage self-supervised techniques in order to generate a stronger embedding-space mapping for input data.…”
Section: Transductive and Self-supervised Approaches to Few-shot Learning (mentioning)
confidence: 99%
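For the Mahalanobis-based comparison attributed to Simple CNAPS above, here is a minimal sketch of scoring queries by negative squared Mahalanobis distance to class-conditional Gaussians. The per-class covariance inputs and the regularizer `reg` are assumptions for illustration (the actual method estimates a blend of class- and task-level covariances):

```python
import numpy as np

def mahalanobis_logits(queries, class_means, class_covs, reg=1e-3):
    """Score each query by -d_M^2 to each class, where d_M is the
    Mahalanobis distance under that class's (regularized) covariance."""
    n_dim = queries.shape[1]
    logits = []
    for mu, cov in zip(class_means, class_covs):
        prec = np.linalg.inv(cov + reg * np.eye(n_dim))    # regularized precision matrix
        diff = queries - mu                                # (n_query, d)
        d2 = np.einsum('nd,de,ne->n', diff, prec, diff)    # squared Mahalanobis distances
        logits.append(-d2)
    return np.stack(logits, axis=1)                        # (n_query, n_class)
```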
“…Table 1. An overview of the differing details between the models trained and tested.

Model Name          Technique                Backbone   Preprocessing   Extra Training Data
AmdimNet [19]       Self-supervised Metric   AmdimNet   No              Yes
EPNet [16]          Transductive Metric      WRN28-10   No              Yes
SimpleCNAPS [14]    Metric                   ResNet18   No              Yes
PT+MAP [17]         Metric                   WRN28-10   Yes             No
LaplacianShot [18]  Metric                   WRN28-10   No              No
S2M2R [20]          Self-supervised Metric   WRN28-10   Yes             No
Reptile [13]        Optimization             CONV4      No              No
MAML [12]           Optimization             CONV4      No              No
ProtoNet [15]       Metric                   CONV4      No              No…”
Section: Model Evaluation Table (mentioning)
confidence: 99%