2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.205

More is Less: A More Complicated Network with Less Inference Complexity

Abstract: In this paper, we present a novel and general network structure towards accelerating the inference process of convolutional neural networks, which is more complicated in network structure yet with less inference complexity. The core idea is to equip each original convolutional layer with another low-cost collaborative layer (LCCL), and the element-wise multiplication of the ReLU outputs of these two parallel layers produces the layer-wise output. The combined layer is potentially more discriminative than the o…
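
To make the abstract's core idea concrete, here is a minimal sketch of an LCCL-style layer in PyTorch. This is not the authors' reference implementation; the module name, layer sizes, and the choice of a 1 × 1 convolution for the low-cost branch are illustrative assumptions, but it follows the abstract's recipe of multiplying the ReLU outputs of two parallel layers element-wise.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LCCLConv(nn.Module):
    """A convolution paired with a low-cost collaborative layer (LCCL).

    The element-wise product of the two branches' ReLU outputs forms the
    layer output; zeros produced by the cheap branch mean the expensive
    branch's result is masked there, which is the source of the paper's
    inference speed-up when those positions are skipped.
    """
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # Original (expensive) convolution.
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2)
        # Low-cost collaborative branch; a 1x1 convolution is one cheap
        # choice (an assumption here, not the paper's only option).
        self.lccl = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return F.relu(self.conv(x)) * F.relu(self.lccl(x))

# Usage: the output is zero wherever the cheap branch is inactive.
x = torch.randn(1, 16, 32, 32)
y = LCCLConv(16, 32)(x)
```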

Cited by 245 publications (193 citation statements) · References 26 publications

“…Besides filter pruning methods, we compare the acceleration of our approach with other network compression methods in Table 1. In general, different layers have different importance and sparsity [24], and training-based methods [10] can find this automatically. Even so, our approach outperforms the other methods.…”
Section: Comparison with Other Compression Methods
confidence: 99%
“…We obtain baseline accuracies on CIFAR-10 and CIFAR-100 of 93.55% and 73.23%, respectively. (2) The ResNet-34 model replaces the shortcut layer with a 1 × 1 convolutional layer, as in ResNet-32 [24]. The initial learning rate is set to 0.1 and is divided by 5 every 60 epochs.…”
Section: Implementation Details and Filter Selection Criteria
confidence: 99%
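
The training schedule described in the excerpt above (start at 0.1, divide by 5 every 60 epochs) corresponds to a standard step decay. A minimal PyTorch sketch, assuming this citing paper's setup; the model, optimizer, and epoch count are placeholders:

```python
import torch

model = torch.nn.Linear(10, 10)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# gamma=0.2 divides the learning rate by 5 every 60 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=60, gamma=0.2)

for epoch in range(180):  # assumed total epoch count
    # ... train one epoch ...
    scheduler.step()  # lr: 0.1 -> 0.02 -> 0.004
```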
“…State-of-the-art techniques on CIFAR-10. We compare IADI against six state-of-the-art techniques, including four dynamic skipping techniques (SkipNet [10], Block-Drop [34], SACT and ACT [8]) and two static compression techniques (PFEC [44] and LCCL [45]).…”
Section: B. Performance of the Proposed IADI
confidence: 99%
“…Following works [19,31,32,33,34,35,36,37,38,39,40,41] [17] and Wasserstein GAN (WGAN) [19] to improve the perceptual quality of the output images generated by VAE and enhance the effectiveness of VAE representations for semi-supervised learning. In addition, a combination of VAE and GAN was also proposed by [42].…”
Section: Generative Adversarial Network
confidence: 99%