2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00948
Dense Classification and Implanting for Few-Shot Learning

Abstract: Training deep neural networks from few examples is a highly challenging and key problem for many computer vision tasks. In this context, we are targeting knowledge transfer from a set with abundant data to other sets with few available examples. We propose two simple and effective solutions: (i) dense classification over feature maps, which for the first time studies local activations in the domain of few-shot learning, and (ii) implanting, that is, attaching new neurons to a previously trained network to lear…
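The first idea in the abstract, dense classification over feature maps, amounts to applying a shared classifier at every spatial position of a convolutional feature map and averaging the per-location losses, rather than pooling the map first. The sketch below is not the authors' implementation; it is a minimal numpy illustration under assumed shapes (a `(C, H, W)` feature map, a `(K, C)` weight matrix) and uses a cosine classifier, a common choice in few-shot work:

```python
import numpy as np

def dense_classify(feature_map, weights):
    """Per-location logits: feature_map (C, H, W), weights (K, C) -> (H*W, K)."""
    C, H, W = feature_map.shape
    feats = feature_map.reshape(C, H * W).T            # one C-dim feature per location
    # Cosine similarity: l2-normalize both features and class weights.
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    return feats @ w.T                                 # (H*W, K) logits

def dense_loss(logits, label):
    """Softmax cross-entropy applied at every location, then averaged."""
    z = logits - logits.max(axis=1, keepdims=True)     # numerically stable softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[:, label].mean()                 # same label at all locations
```

The point of averaging over locations is that every local activation, not just the pooled global feature, is pushed toward the class prototype, which acts as a spatial regularizer when only a few examples are available.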

Cited by 191 publications (134 citation statements). References 37 publications.
“…During training phase, tasks were sampled, then loss function of the model was calculated based on the tasks, and the network parameters were updated through the back propagation. However, some works [18][19][20][21] did not follow this setting, but trained classification networks in the way of traditional supervised learning.…”
Section: A Strong Baseline for Few-Shot Learning
“…To make fair comparisons with the previous works [13,14,16,17,20], we adopt ResNet-12 [3] as the baseline's backbone and use the training set D base to optimize it in a fully-supervised manner. However, it is interesting to find that a network with only a supervised training strategy can also get superior classification results on novel classes (elaborated in Sect.…”
Section: A Strong Baseline for Few-Shot Learning