2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.88
A Compact DNN: Approaching GoogLeNet-Level Accuracy of Classification and Domain Adaptation

Abstract: Recently, DNN model compression based on network architecture design, e.g., SqueezeNet, has attracted a lot of attention. Compared to well-known models, these extremely compact networks show no accuracy drop on image classification. An emerging question, however, is whether such compression techniques hurt a DNN's learning ability beyond classifying images on a single dataset. Our preliminary experiment shows that these compression methods could degrade domain adaptation (DA) ability, though the classifi…
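As background for the architecture-design style of compression the abstract refers to, below is a minimal PyTorch sketch of a SqueezeNet-style Fire module (a 1x1 "squeeze" layer followed by parallel 1x1/3x3 "expand" layers). The class name and channel sizes are illustrative assumptions, not taken from this paper.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """Sketch of a SqueezeNet-style Fire module (channel sizes are placeholders)."""
    def __init__(self, in_ch, squeeze_ch, expand_ch):
        super().__init__()
        # 1x1 "squeeze" layer cuts the channel count before the costly expand step
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        # parallel 1x1 and 3x3 "expand" layers; their outputs are concatenated
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# e.g. a 96-channel feature map squeezed to 16 channels, expanded to 64 + 64
fire = Fire(96, 16, 64)
out = fire(torch.randn(1, 96, 56, 56))  # -> torch.Size([1, 128, 56, 56])
```

Replacing most 3x3 filters with 1x1 filters and shrinking the input channels to the remaining 3x3 filters is what lets such networks keep accuracy while cutting parameters sharply.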

Cited by 49 publications (20 citation statements)
References 26 publications
“…Interestingly, many compact deep neural network (DNN) models can still achieve excellent performance on image classification, such as SqueezeNet [21], Compact DNN [22], MobileNetV1 [23], ShuffleNet [24], and MobileNetV2 [25]. Some popular applications of object recognition include gesture recognition [26], handwritten Chinese text recognition [27], and traffic sign classification [28].…”
Section: Compact Design (mentioning)
confidence: 99%
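For context on why the models listed in this statement are compact, here is a hedged PyTorch sketch of the depthwise-separable convolution at the core of MobileNetV1 [23]: a per-channel 3x3 depthwise convolution followed by a 1x1 pointwise convolution. The function name, channel counts, and BatchNorm/ReLU choices follow common practice and are assumptions, not details from the cited papers.

```python
import torch
import torch.nn as nn

def depthwise_separable(in_ch, out_ch, stride=1):
    """Sketch of a MobileNetV1-style depthwise-separable conv block."""
    return nn.Sequential(
        # depthwise: one 3x3 filter per input channel (groups=in_ch)
        nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1,
                  groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        # pointwise: a cheap 1x1 conv mixes information across channels
        nn.Conv2d(in_ch, out_ch, 1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

block = depthwise_separable(32, 64)
y = block(torch.randn(1, 32, 112, 112))  # -> torch.Size([1, 64, 112, 112])
```

Splitting a standard convolution into these two steps reduces the multiply-accumulate and parameter cost by roughly a factor of the kernel area, which is the main source of MobileNet's compactness.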
“…Various trained CNN models are studied across different imaging applications [2]. The AlexNet architecture is applied, showing that CNNs performed better even with a smaller number of training samples.…”
Section: Literature Review (mentioning)
confidence: 99%
“…
Model                   Top-1 Accuracy   Size (MB)
AlexNet [16]            57.1             15.0
VGG-16 [24]             71.5             58.9
GoogLeNet [25]          69.8             23.2
ShuffleNet-2x [33]      70.9             19.2
1.0 MobileNet-224 [12]  70.6             15.9
Compact DNN [28]        68.9             13.6
Gnet-1 (ours)           67.0             5.5
Gnet-2 (ours)           58.1             2.8

… convolutional architecture as VGG, and the FC layers are deployed outside the accelerator. In Section 6, we will show that FC can be implemented within our CNN-DSA accelerator as well.…”
Section: Model (mentioning)
confidence: 99%
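As a sanity check on the Size (MB) column above, a small Python sketch relating parameter count to on-device model size under different weight bit-widths. The ~61M AlexNet parameter count is an approximate public figure, and the 2-bit reading is our inference from the accelerator context, not a number stated in the quoted table.

```python
def model_size_mb(num_params: int, bits_per_weight: float) -> float:
    """Approximate model size: parameters x bits per weight, converted to MB."""
    return num_params * bits_per_weight / 8 / 1e6

alexnet_params = 61_000_000  # approximate, public figure

print(model_size_mb(alexnet_params, 32))  # ~244 MB at float32
print(model_size_mb(alexnet_params, 2))   # ~15 MB with 2-bit weights
```

A float32 AlexNet is roughly 244 MB, so the 15.0 MB entry only makes sense with aggressively quantized weights, which is consistent with the CNN-DSA accelerator setting the statement describes.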