2021
DOI: 10.1007/978-3-030-86340-1_39
Leveraging the Feature Distribution in Transfer-Based Few-Shot Learning

Citation types: 4 supporting, 110 mentioning, 0 contrasting

Cited by 100 publications (114 citation statements)
References 15 publications
“…
Method                         1-shot          5-shot
SSR [44]                       72.40 ± 0.60    80.20 ± 0.40
fine-tuning (train+val) [49]   68.11 ± 0.69    80.36 ± 0.50
SIB+E3BM [50]                  71.40           81.20
LR+DC [17]                     68.57 ± 0.55    82.88 ± 0.42
EPNet [31]                     70.74 ± 0.85    84.34 ± 0.53
TIM-GD [42]                    77.80           87.40
PT+MAP [51]                    82.92 ± 0.26    88.82 ± 0.13
iLPC [45]                      83.05 ± 0.79    88.82 ± 0.42
ODC [43]                       80.64 ± 0.34    89.39 ± 0.39
PEMnE-BMS* [32]                83.35 …”
Section: Table IV, 1-shot and 5-shot Accuracy of State-of-the-art Methods (mentioning)
confidence: 99%
“…
Method                 1-shot          5-shot
PT+MAP [51]            85.67 ± 0.26    90.45 ± 0.14
TIM-GD [42]            79.90           88.50
ODC [43]               83.73 ± 0.36    90.46 ± 0.46
SSR [44]               81.20 ± 0.60    85.70 ± 0.40
Rot+KD+POODLE [48]     79.67           86.96
DPGN [46]              72.45 ± 0.51    87.24 ± 0.39
EPNet [31]             76.53 ± 0.87    87.32 ± 0.64
ECKPN [47]             73.59 ± 0.45    88.13 ± 0.28
iLPC [45]              83.49 ± 0.88    89.48 ± 0.47
ASY ResNet12 (ours)    …”
Section: Table IV, 1-shot and 5-shot Accuracy of State-of-the-art Methods (mentioning)
confidence: 99%
“…Compositional Feature Transformation. It is believed that FSL algorithms favor features with more Gaussian-like distributions, and thus various kinds of transformations are used to improve the normality of the feature distribution, including power transformation (Hu et al., 2021), Tukey's Ladder of Powers transformation (Yang et al., 2021), and L2 normalization (Wang et al., 2019). While these transformations are normally used independently, here we propose to combine several transformations sequentially in order to enlarge the expressivity of the transformation function and to increase the polymorphism of the FSL process.…”
Section: DeepVoro: Integrating Multi-level Heterogeneity of FSL (mentioning)
confidence: 99%
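
To make the sequential composition concrete, here is a minimal sketch of chaining such feature transformations. The function names and exponent choices are illustrative assumptions, not the quoted paper's implementation, and it presumes non-negative backbone features (e.g., post-ReLU outputs) so the power transforms are well defined.

```python
import numpy as np

def power_transform(x, beta=0.5, eps=1e-6):
    # Element-wise power transform; beta = 0.5 is a commonly used value.
    return np.power(x + eps, beta)

def tukey_transform(x, lam=0.5):
    # Tukey's Ladder of Powers; the lam = 0 case degenerates to a log.
    return np.power(x, lam) if lam != 0 else np.log(x)

def l2_normalize(x, eps=1e-12):
    # Project each feature vector onto the unit hypersphere.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def compose(x, transforms):
    # Apply a sequence of transformations left to right.
    for t in transforms:
        x = t(x)
    return x

# Toy usage: non-negative features of shape (n_samples, dim), e.g. post-ReLU.
feats = np.abs(np.random.randn(100, 64))
feats = compose(feats, [power_transform, tukey_transform, l2_normalize])
```

Because each step reshapes the marginal distribution of the previous one, the order of composition matters; downstream classifiers then see features that are both closer to Gaussian and scale-normalized.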
“…(3) We show that the proposed method can bring a large increase in accuracy with a variety of feature extractors and datasets, leading to state-of-the-art results on the considered benchmarks. This work is an extended version of [9], with the main difference that here we consider the broader case where we do not know the proportion of samples belonging to each considered class in the transductive few-shot setting, leading to a new algorithm called Boosted Min-size Sinkhorn. We also propose more efficient preprocessing steps, leading to overall better performance in both inductive and transductive settings.…”
Section: Contributions: Let Us Highlight the Main Contributions of This... (mentioning)
confidence: 99%
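
For background, Sinkhorn-style transductive methods turn a query-to-class cost matrix into soft assignments under constraints on class proportions; the classical balanced variant assumes equal class sizes, which is exactly the assumption Boosted Min-size Sinkhorn is designed to relax. Below is a minimal sketch of the classical balanced variant only, with illustrative names and parameters, not the paper's algorithm.

```python
import numpy as np

def balanced_sinkhorn(cost, n_iters=50, temperature=0.1):
    # cost: (n_queries, n_classes), e.g. distances from query features to
    # class centroids. Returns soft assignments whose rows sum to 1 and
    # whose columns each sum to n_queries / n_classes -- the equal-class-
    # proportion assumption that Boosted Min-size Sinkhorn relaxes.
    P = np.exp(-cost / temperature)
    n, k = P.shape
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)              # each query emits mass 1
        P /= P.sum(axis=0, keepdims=True) * (k / n)    # each class receives n / k
    return P

# Toy usage: 30 queries, 5 classes.
rng = np.random.default_rng(0)
soft = balanced_sinkhorn(rng.random((30, 5)))
hard_labels = soft.argmax(axis=1)
```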
“…Namely, the model learns a set of initialization parameters that put it in an advantageous position to adapt to a new (small) dataset. Recently, the trend has evolved towards using well-thought-out transfer architectures (called backbones) [4,5,6,7,8,9] trained once on the same training data, seen as a single large dataset.…”
Section: Introduction (mentioning)
confidence: 99%
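
The "learned initialization" idea in the quoted passage is typified by MAML-style meta-learning: an outer loop updates the initialization so that one inner gradient step adapts it well to each small task. Here is a minimal first-order sketch on a toy linear-regression task; the loss, learning rates, and task sampling are all illustrative assumptions, not any specific paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, inner_lr, outer_lr = 5, 0.1, 0.01
w0 = rng.normal(size=dim)                 # the meta-learned initialization

def mse_grad(w, X, y):
    # Gradient of mean squared error for a linear model y = X @ w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

for step in range(200):
    # Sample a small task: a random linear function with support/query splits.
    w_task = rng.normal(size=dim)
    Xs, Xq = rng.normal(size=(5, dim)), rng.normal(size=(5, dim))
    ys, yq = Xs @ w_task, Xq @ w_task
    w_fast = w0 - inner_lr * mse_grad(w0, Xs, ys)   # inner adaptation step
    w0 -= outer_lr * mse_grad(w_fast, Xq, yq)       # first-order meta-update
```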