2015 IEEE International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2015.314
HD-CNN: Hierarchical Deep Convolutional Neural Networks for Large Scale Visual Recognition

Abstract: In image classification, visual separability between different object categories is highly uneven, and some categories are more difficult to distinguish than others. Such difficult categories demand more dedicated classifiers. However, existing deep convolutional neural networks (CNN) are trained as flat N-way classifiers, and few efforts have been made to leverage the hierarchical structure of categories. In this paper, we introduce hierarchical deep CNNs (HD-CNNs) by embedding deep CNNs into a category hiera…
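The abstract (and the citation statement from Yan et al. below) describes HD-CNN as a coarse classifier routing to fine classifiers trained over subsets of classes. A minimal sketch of that coarse-to-fine combination rule, assuming a probabilistic averaging of fine predictions weighted by coarse-category probabilities; all names and shapes here are illustrative, not taken from the paper's implementation:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def hdcnn_predict(coarse_logits, fine_logits_list):
    """Combine fine-classifier predictions weighted by coarse probabilities.

    coarse_logits: shape (K,), scores over K coarse categories.
    fine_logits_list: list of K arrays, each shape (C,), the k-th fine
        classifier's scores over all C fine classes.
    Returns a (C,) probability vector: p(y|x) = sum_k B_k(x) * p_k(y|x),
    where B_k(x) is the coarse classifier's probability for group k.
    """
    coarse_probs = softmax(coarse_logits)                           # B_k(x), (K,)
    fine_probs = np.stack([softmax(f) for f in fine_logits_list])   # (K, C)
    return coarse_probs @ fine_probs                                # (C,)

# Toy example: 2 coarse groups, 4 fine classes.
rng = np.random.default_rng(0)
probs = hdcnn_predict(rng.normal(size=2),
                      [rng.normal(size=4) for _ in range(2)])
```

Because each fine distribution sums to 1 and the coarse weights sum to 1, the combined vector is itself a valid distribution over all C classes.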

Cited by 528 publications (473 citation statements). References 26 publications.
“…A principally similar work named hierarchical deep CNN (HD-CNN) [50] has studied the effectiveness of a tree-structured CNN ensemble on general object classification problems involving dozens of classes. However, with only a few classes, it is inconvenient to build multi-level class taxonomy, and appending a near-full-sized CNN structure might not be sufficiently cost-efficient considering the size of the problem.…”
Section: Discussion
confidence: 99%
“…Different from its ancestor AlexNet [71], VGG-M uses small kernels of size 3 × 3 with 1 pixel-sized padding, making them ideal for encoding the local structural differences. After that, the development on CNN have either sought greater depth by shortcut connection [76][77][78] or more miscellaneous structural complexities [21,50,79].…”
Section: The Baseline Network Structure and Extension Styles For Anal…
confidence: 99%
“…For instance, one can not only build identifiers directly based on attributes [10], but also efficiently construct highly flexible large-scale hierarchical datasets, which can further benefit image classification and attributeto-image generation [23,22].…”
Section: Classifier
confidence: 99%
“…For example, Salakhutdinov et al [33] presented a hierarchical classification model to share features between categories, and boosted the classification performance for objects with few training examples. Yan et al [49] presented a hierarchical deep CNN (HD-CNN) that consists of a coarse component trained over all classes as well as several fine components trained over subsets of classes. Instead of utilizing a fixed architecture for classification, Murdock et al [29] proposed a regularization method, i.e.…”
Section: Hierarchical Models For Image Classification
confidence: 99%