2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/CVPR.2017.15
On Compressing Deep Models by Low Rank and Sparse Decomposition

Cited by 326 publications (189 citation statements)
References 6 publications
“…Compression
Finetuned SVD 2 [34]: 2.6x
Circulant CNN 2 [7]: 3.6x
Adaptive Fastfood-16 [34]: 3.7x
Collins et al. [8]: 4x
Zhou et al. [39]: 4.3x
ACDC [27]: 6.3x
Network Pruning [14]: 9.1x
Deep Compression [14]: 9.1x
GreBdec [38]: 10.2x
Srinivas et al. [30]: 10.3x
Guo et al. [13]: 17.9x
Binarization: ≈32x
…with interesting areas to explore, such as fast classification and sketch-based image retrieval. Reproducibility: Our implementation can be found on GitHub 1…”
Section: Methods (mentioning; confidence: 99%)
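The GreBdec entry in the table above (10.2x) appears to refer to the decomposition proposed in this paper: each weight matrix W is approximated as L + S, with L low rank and S sparse. Below is a minimal NumPy sketch of that general idea; the rank and sparsity budgets are illustrative choices, not values from the paper, and the greedy optimization the paper uses is replaced here by a single truncated SVD plus residual thresholding.

import numpy as np

def low_rank_sparse(W, rank, sparsity):
    """Approximate W as L + S: L from truncated SVD, S the largest
    residual entries (keeping a `sparsity` fraction of them)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # low-rank part
    R = W - L                                  # residual
    k = int(sparsity * R.size)                 # entries to keep
    thresh = np.partition(np.abs(R).ravel(), -k)[-k]
    S = np.where(np.abs(R) >= thresh, R, 0.0)  # sparse part
    return L, S

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))
L, S = low_rank_sparse(W, rank=32, sparsity=0.05)

# Rough storage comparison: dense entries vs. factors plus sparse values
# (index overhead for S is ignored here).
dense = W.size
compressed = 32 * (512 + 512) + np.count_nonzero(S)
print(f"approx. compression: {dense / compressed:.1f}x")
print(f"relative error: {np.linalg.norm(W - L - S) / np.linalg.norm(W):.3f}")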
“…[7,9,22]). Building on the observation that weight matrices are often redundant, another line of research proposes matrix factorization [10,15,35] to decompose large weight matrices into products of smaller factors before inference.…”
Section: Related Work (mentioning; confidence: 99%)
“…Apart from pruning, other techniques for CNN acceleration include quantization [10,6], knowledge distillation [16,39], tensor decomposition [11,38], and low-bit arithmetic [35,34]. These methods are complementary and orthogonal to our pruning-based method, so, following common practice in other works [21,20], we do not cover them in the experiments.…”
Section: Related Work (mentioning; confidence: 99%)
“…Neural network compression and acceleration is an effective solution to this problem. Several compression techniques have been proposed in recent years, for example knowledge distillation [16,39], tensor decomposition [11,38], quantization [10,6], and low-bit arithmetic [35,34]. Among these techniques, pruning is an important approach.…”
Section: Introduction (mentioning; confidence: 99%)
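Since the passage above singles out pruning as an important approach, a minimal sketch of its simplest variant, magnitude pruning, follows: the smallest-magnitude weights are zeroed until a target sparsity is reached. The 90% sparsity target is an illustrative figure only, not a value from any cited work.

import numpy as np

def magnitude_prune(W, sparsity=0.9):
    """Zero out the `sparsity` fraction of weights with smallest magnitude."""
    k = int(sparsity * W.size)                    # weights to remove
    thresh = np.partition(np.abs(W).ravel(), k)[k]
    return np.where(np.abs(W) < thresh, 0.0, W)

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))
Wp = magnitude_prune(W, sparsity=0.9)
print(f"kept {np.count_nonzero(Wp) / W.size:.1%} of weights")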