2020
DOI: 10.1007/978-3-030-58517-4_24

An Ensemble of Epoch-Wise Empirical Bayes for Few-Shot Learning

Cited by 90 publications (44 citation statements)
References 30 publications
“…• Extensive experiments on two popular standard few-shot classification benchmark datasets, the general object dataset mini-ImageNet and the fine-grained dataset Caltech-UCSD Birds-200-2011 (CUB), show that the proposed method surpasses several state-of-the-art methods [17]-[21], [28], [34], [37], [44] and validate the feasibility of our model.…”
Section: Introduction
confidence: 61%
“…We conduct experiments on the mini-ImageNet and CUB datasets and compare our model with a series of current prevailing models, including Matching Networks [10], MAML [13], Prototypical Networks [17], Relation Networks [18], TADAM [19], MetaOptNet [37], Baseline++ [20], Meta-Baseline [21], DSN [34], E3BM [44], and Neg-Cosine [28]. The experimental results are listed in Table 2 and Table 3, respectively.…”
Section: Comparison With the State-of-the-Arts
confidence: 99%
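Comparisons like the one quoted above are typically reported as mean accuracy over many randomly sampled N-way K-shot episodes. The sketch below illustrates that standard episodic evaluation protocol; the dataset layout and the `model.adapt` / `model.predict` interface are illustrative assumptions, not code from any cited paper.

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, q_queries=15):
    # dataset: mapping class_label -> list of samples (assumed layout).
    classes = random.sample(sorted(dataset), n_way)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        samples = random.sample(dataset[cls], k_shot + q_queries)
        support += [(x, episode_label) for x in samples[:k_shot]]
        query += [(x, episode_label) for x in samples[k_shot:]]
    return support, query

def evaluate(model, dataset, n_episodes=600):
    # Mean accuracy over many random episodes, as reported in few-shot tables.
    accs = []
    for _ in range(n_episodes):
        support, query = sample_episode(dataset)
        model.adapt(support)                      # fit / fine-tune on the support set
        correct = sum(model.predict(x) == y for x, y in query)
        accs.append(correct / len(query))
    return sum(accs) / len(accs)
```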
“…Memory network methods (i.e., Meta Networks, 103 TADAM, 104 MCFS, 105 and MRN 106 ) learn to store “experience” when learning seen tasks and then generalize it to unseen tasks. Gradient descent‐based meta‐learning methods (i.e., MAML, 35 Meta‐LSTM, 107 MetaGAN, 42 LEO, 108 LGM‐Net, 109 CTM, 110 MetaOptNet, 111 SIB + E3BM, 141 and LSBC 112 ) aim to adjust the optimization algorithm so that the model can converge within a small number of optimization steps (with a few examples).…”
Section: Methods
confidence: 99%
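The gradient-descent-based family described in the quote can be made concrete with a MAML-style sketch: a few inner gradient steps on a task's support set produce fast weights, and backpropagating the query loss through those steps updates the shared initialization so that new tasks converge in only a handful of steps. The hyperparameters, `model`, and data tensors below are assumptions for illustration, and the sketch assumes PyTorch 2.x for `torch.func.functional_call`.

```python
import torch
import torch.nn.functional as F

def inner_adapt(model, support_x, support_y, inner_lr=0.01, steps=5):
    # Start from the model's current (meta-learned) parameters.
    fast_weights = dict(model.named_parameters())
    for _ in range(steps):
        logits = torch.func.functional_call(model, fast_weights, (support_x,))
        loss = F.cross_entropy(logits, support_y)
        # create_graph=True keeps the inner steps differentiable for the meta-update.
        grads = torch.autograd.grad(loss, list(fast_weights.values()), create_graph=True)
        fast_weights = {name: w - inner_lr * g
                        for (name, w), g in zip(fast_weights.items(), grads)}
    return fast_weights

def meta_loss(model, support_x, support_y, query_x, query_y):
    # Query loss under the adapted weights; its gradient w.r.t. the original
    # parameters trains an initialization that adapts in a few steps.
    fast_weights = inner_adapt(model, support_x, support_y)
    logits = torch.func.functional_call(model, fast_weights, (query_x,))
    return F.cross_entropy(logits, query_y)
```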
“…Few-shot learning aims to learn to generalize to new categories with a few labeled samples in each class. Current few-shot methods mainly include optimization-based methods [12,23,32,39,45,46,55] and metric-based methods [13,19,44,49,52,58,57,53]. Optimization-based methods can achieve fast adaptation to new tasks with limited samples by learning a specific optimization algorithm.…”
Section: Related Work
confidence: 99%
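As a counterpart to the optimization-based family, the metric-based family mentioned in the quote can be sketched in the style of Prototypical Networks: class prototypes are the mean support-set embeddings, and queries are scored by negative squared distance to each prototype. The `encoder` and tensor shapes are assumptions for illustration, not the method of any cited paper.

```python
import torch

def prototypical_logits(encoder, support_x, support_y, query_x, n_way):
    # Embed support and query samples with a shared feature extractor.
    z_support = encoder(support_x)                       # [n_support, d]
    z_query = encoder(query_x)                           # [n_query,  d]
    # Class prototypes: mean embedding of each class's support samples.
    prototypes = torch.stack([z_support[support_y == c].mean(dim=0)
                              for c in range(n_way)])    # [n_way, d]
    # Score queries by negative squared Euclidean distance to each prototype.
    return -torch.cdist(z_query, prototypes) ** 2        # [n_query, n_way]

# Training and evaluation apply cross-entropy over these logits, e.g.
# torch.nn.functional.cross_entropy(prototypical_logits(enc, sx, sy, qx, 5), qy)
```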