2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.00891

Mixture-based Feature Space Learning for Few-shot Image Classification

Cited by 55 publications (19 citation statements) · References 41 publications
“…However, our method still achieves the new SOTA results on both of these two benchmarks.

    Method            Backbone    5-way 1-shot    5-way 5-shot
    …                 ResNet-12   47.76 ± 0.77    65.30 ± 0.76
    BML [84]          ResNet-12   45.00 ± 0.41    63.03 ± 0.41
    ALFA+MeTAL [3]    ResNet-12   44.54 ± 0.50    58.44 ± 0.42
    MixtFSL [1]       ResNet-12   41.50 ± 0.67    58.39 ± 0.62
    PAL [41]          ResNet-12   47.20 ± 0.60    64.00 ± 0.60
    TPMN [67]         ResNet-12   46.93 ± 0.71    63.26 ± 0.74
    MN + MC [79]      ResNet…     …               …

Table 3. Comparison with the state-of-the-art 5-way 1-shot and 5-way 5-shot performance with 95% confidence intervals on FC100.…”
Section: Comparisons With State-of-the-art Results
confidence: 99%
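The excerpt above reports few-shot accuracies as a mean with a 95% confidence interval over many sampled test episodes, which is the standard reporting protocol behind results tables like Table 3. The following is a minimal sketch of how such an interval is typically computed; the episode count and accuracy values are made up for illustration and are not taken from any of the cited papers.

    import numpy as np

    def mean_and_ci95(episode_accuracies):
        """Mean accuracy and 95% confidence interval over few-shot episodes."""
        accs = np.asarray(episode_accuracies, dtype=np.float64)
        mean = accs.mean()
        # Normal approximation: half-width = 1.96 * standard error of the mean.
        ci95 = 1.96 * accs.std(ddof=1) / np.sqrt(len(accs))
        return mean, ci95

    # Hypothetical accuracies from 600 evaluation episodes.
    rng = np.random.default_rng(0)
    accs = rng.normal(loc=0.415, scale=0.17, size=600).clip(0.0, 1.0)
    mean, ci = mean_and_ci95(accs)
    print(f"{100 * mean:.2f} ± {100 * ci:.2f}")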
“…At the same time, these surrogates themselves need to be learnt. Supposing the supervised learning objective of the student is L_surr, the parameters θ_s of the student and its associated surrogates can be updated with the following equations […]. In between every two transformer sets, a spectral tokens pooling layer is used to down-sample the patch token number by 1/2 for information aggregation.…”
Section: Jointly Attribute Surrogates and Parameters Learning
confidence: 99%
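The excerpt describes updating the student's parameters θ_s and its learnable surrogates jointly under a single supervised objective L_surr, but the actual update equations are not captured in the excerpt. Below is a hedged PyTorch-style sketch of that general idea, not the cited paper's implementation: the network, the surrogate parameterization, and the temperature are all assumptions made for illustration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class StudentNet(nn.Module):
        """Stand-in student: a feature encoder plus one learnable surrogate per class."""
        def __init__(self, in_dim=512, feat_dim=128, num_classes=64):
            super().__init__()
            self.encoder = nn.Linear(in_dim, feat_dim)               # placeholder backbone
            self.surrogates = nn.Parameter(torch.randn(num_classes, feat_dim))

        def forward(self, x):
            return F.normalize(self.encoder(x), dim=-1)

    model = StudentNet()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)          # covers θ_s and the surrogates

    x = torch.randn(32, 512)                      # hypothetical batch of inputs
    y = torch.randint(0, 64, (32,))               # base-class labels

    feats = model(x)
    logits = feats @ F.normalize(model.surrogates, dim=-1).t()       # cosine similarity to surrogates
    loss_surr = F.cross_entropy(logits / 0.1, y)                     # L_surr with an assumed temperature

    optimizer.zero_grad()
    loss_surr.backward()                          # one backward pass updates both parameter groups
    optimizer.step()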
“…Transfer learning. These methods [5,13,3,12,8,21,1,23,7,2,16,36] take advantage of the standard transfer learning pipeline, which first pretrains a model on the base classes, then revises the feature embeddings output by the pretrained model using the limited novel samples. [5] used a cosine classifier to normalize the magnitudes of both the embeddings and the classification weights, compacting intra-class variation.…”
Section: Related Work
confidence: 99%
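The excerpt's description of [5], which normalizes the magnitudes of both the embeddings and the classification weights, corresponds to a cosine classifier head, a common choice in the pretrain-then-finetune pipeline for few-shot transfer. The sketch below shows a generic version of that head; the class name, feature dimension, and scale value are illustrative assumptions, not the cited paper's code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CosineClassifier(nn.Module):
        """L2-normalizes features and class weights so logits depend only on angle."""
        def __init__(self, feat_dim, num_classes, scale=10.0):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
            self.scale = scale                    # temperature; the value is an assumption

        def forward(self, features):
            f = F.normalize(features, dim=-1)
            w = F.normalize(self.weight, dim=-1)
            return self.scale * f @ w.t()

    # Usage sketch: score 25 query embeddings against a 5-way head
    # during fine-tuning on the novel classes.
    head = CosineClassifier(feat_dim=640, num_classes=5)
    logits = head(torch.randn(25, 640))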
“…Naturally, few-shot learning [35] aims at learning from scarce data, a problem studied long before the deep learning era. In this paper, we focus on the image recognition task, also known as few-shot image classification, a widely studied few-shot task [33,29,9,30,5,21,1,23,7,2,36]. Deep learning techniques have further pushed few-shot learning's average accuracy over multiple runs (i.e., episodes) to a level that already appears applicable to real-world applications.…”
Section: Introduction
confidence: 99%