2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.561
MuCaLe-Net: Multi Categorical-Level Networks to Generate More Discriminating Features

Abstract: In a transfer-learning scheme, the intermediate layers of a pre-trained CNN are employed as a universal image representation to tackle many visual classification problems. The current trend to generate such a representation is to learn a CNN on a large set of images labeled among the most specific categories. Such processes ignore potential relations between categories, as well as the categorical levels used by humans to classify. In this paper, we propose Multi Categorical-Level Networks (MuCaLe-Net) that include…

Cited by 11 publications (19 citation statements) | References 28 publications
“…iCaRL halves this score while our best configurations with DFE based on 100 and 1000 classes lose only 22 and 12 points respectively. The gap could probably be further reduced if the feature extractors were more universal [10,11]. This could, for instance, be achieved if DeeSIL's initial training were done with an even larger number of classes.…”
Section: Evaluation and Discussion
confidence: 99%
“…One thousand classes are selected to form a diversified subset of ImageNet and thus increase universality (i.e. optimize their transferability toward new tasks) [10,11]. - FL1000 - train with a more challenging dataset which is obtained from weakly annotated Flickr group data and is visually more distant from the test set.…”
Section: Methods
confidence: 99%
“…Bias when fine-tuning pre-trained units Here our goal is to highlight that, as in (Zhou et al, 2018a), pre-trained units can be biased in the standard fine-tuning scheme. To do so, we follow (Tamaazousti et al, 2017) and analyse the units of Φ (biLSTM layer) before and after fine-tuning. Specifically, we compute the Pearson's correlation between all the units of the layer before and after fine-tuning.…”
Section: Discussion
confidence: 99%
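The unit-correlation analysis quoted above can be sketched in a few lines of NumPy. This is a minimal illustration, not the cited authors' code: the activation matrices, their shapes, and the synthetic "fine-tuned" activations are hypothetical stand-ins for a layer's outputs on a probe set, recorded before and after fine-tuning.

```python
import numpy as np

# Hypothetical activations of one layer on a probe set,
# shape (num_examples, num_units), before and after fine-tuning.
rng = np.random.default_rng(0)
acts_before = rng.normal(size=(200, 8))
# Simulated post-fine-tuning activations: mostly preserved, lightly perturbed.
acts_after = 0.9 * acts_before + 0.1 * rng.normal(size=(200, 8))

def unitwise_pearson(a, b):
    """Pearson correlation of each unit's activations before vs after fine-tuning."""
    a = (a - a.mean(axis=0)) / a.std(axis=0)
    b = (b - b.mean(axis=0)) / b.std(axis=0)
    return (a * b).mean(axis=0)  # shape: (num_units,)

corrs = unitwise_pearson(acts_before, acts_after)
print(corrs.shape)  # one correlation coefficient per unit
```

A unit whose correlation stays near 1 was largely preserved by fine-tuning, while a low correlation indicates the unit was repurposed, which is the kind of bias the quoted discussion probes.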
“…Although this similarity is not easy to formalize, one has the intuition that the closer the two tasks, the better the representation will be adapted to the target task. This consideration leads to several methods that tend to obtain more universal representations [2,9,19,29,35,37], that is to say, representations that are better adapted to a large set of diverse target tasks in a transfer-learning scenario.…”
Section: Introduction
confidence: 99%
“…All these approaches vary the problem by creating new categories having an existing label. However, most of them studied the effect of adding categories extracted from ImageNet, either generic categories [20,23,35,37] or specific ones [1,2,29,46] that sit at the bottom of a hierarchy such as ImageNet, except [18,38,39], which use web annotations with noisy labels. In general, the usage of specific categories tends to provide better performance than generic ones [2,29], although combining them can significantly boost the universalizing capacity of the CNN [35,37].…”
Section: Introduction
confidence: 99%