Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2016
DOI: 10.1145/2939672.2939839

Compressing Convolutional Neural Networks in the Frequency Domain

Abstract: Convolutional neural networks (CNNs) are increasingly used in many areas of computer vision. They are particularly attractive because of their ability to "absorb" great quantities of labeled data through millions of parameters. However, as model sizes increase, so do the storage and memory requirements of the classifiers, hindering many applications such as image and speech recognition on mobile phones and other devices. In this paper, we present a novel network architecture, Frequency-Sensitive Hashed Nets (Fr…
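
The abstract is truncated above, but the title states the core idea: convolutional filters are represented in the frequency domain and their coefficients are tied together with a hashing trick, so that many coefficients share a single stored value. Below is a minimal NumPy/SciPy sketch of that general idea only; the function names, the random hash mapping, and the bucket-averaging initialization are my own illustration rather than the paper's exact scheme (the paper, for instance, learns the shared values during training and, as I recall, allocates fewer shared values to high-frequency components).

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_filter_bank(filters, num_buckets, seed=0):
    """Toy frequency-domain compression of a conv filter bank.

    filters: array of shape (num_filters, k, k).
    Each filter is transformed with a 2-D DCT; every frequency
    coefficient is then hashed into one of `num_buckets` shared
    values, so only `num_buckets` numbers need to be stored.
    """
    rng = np.random.default_rng(seed)
    # Fixed random hash: coefficient position -> bucket index.
    bucket_of = rng.integers(0, num_buckets, size=filters.shape)

    freq = dctn(filters, axes=(1, 2), norm="ortho")

    # Each bucket stores the mean of the coefficients hashed into it.
    shared = np.zeros(num_buckets)
    counts = np.bincount(bucket_of.ravel(), minlength=num_buckets)
    np.add.at(shared, bucket_of.ravel(), freq.ravel())
    shared /= np.maximum(counts, 1)
    return shared, bucket_of

def decompress_filter_bank(shared, bucket_of):
    """Rebuild approximate spatial filters from the shared buckets."""
    freq = shared[bucket_of]
    return idctn(freq, axes=(1, 2), norm="ortho")

if __name__ == "__main__":
    filters = np.random.randn(64, 5, 5)          # 1600 parameters
    shared, bucket_of = compress_filter_bank(filters, num_buckets=200)
    approx = decompress_filter_bank(filters_shared := shared, bucket_of)
    err = np.linalg.norm(filters - approx) / np.linalg.norm(filters)
    print(f"stored values: {shared.size}, relative error: {err:.3f}")
```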

Cited by 145 publications (104 citation statements). References 35 publications (40 reference statements). Citing publications range from 2017 to 2023.
“…Much has been done to minimize the memory requirements of neural networks [445,493-496,506,507], but there is also growing interest in specialized hardware, such as field-programmable gate arrays (FPGAs) [502,508] and application-specific integrated circuits (ASICs) [509]. Less software is available for such highly specialized hardware [508].…”
Section: Data Limitations (mentioning)
confidence: 99%
“…One stream focuses on designing efficient network architectures [30,28,15,40,13], including depthwise separable convolution [30], point-wise group convolution with channel shuffling [39], and learned group convolution [15], to name a few. The other line of research explores methods to prune [8,23,11] or quantize [4,8,17] neural network weights. These strategies are effective when neural networks have a substantial amount of redundant weights, which can be safely removed or quantized without sacrificing accuracy.…”
Section: Related Work (mentioning)
confidence: 99%
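
The passage above names depthwise separable convolution as one of the efficient building blocks. As a quick illustration of why it saves parameters, here is a minimal PyTorch sketch; the module name, channel counts, and input shape are arbitrary choices for illustration:

```python
import torch
from torch import nn

class DepthwiseSeparableConv(nn.Module):
    """A per-channel (depthwise) 3x3 convolution followed by a 1x1
    (pointwise) convolution that mixes channels, replacing a dense
    k x k convolution with far fewer weights."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(
            in_ch, in_ch, kernel_size, stride=stride,
            padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

if __name__ == "__main__":
    block = DepthwiseSeparableConv(32, 64)
    x = torch.randn(1, 32, 56, 56)
    print(block(x).shape)                 # torch.Size([1, 64, 56, 56])
    dense = 3 * 3 * 32 * 64               # weights in a dense 3x3 conv
    separable = 3 * 3 * 32 + 32 * 64      # depthwise + pointwise weights
    print(dense, separable)               # 18432 vs 2336
```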
“…Extensive efforts have been made to improve the inference efficiency of deep CNNs in recent years. Popular approaches include efficient architecture design [30,28,15,40], network pruning [8,23,26], weight quantization [4,8,17] and adaptive inference [7,14,2,35,6,34]. Among them, adaptive inference is gaining increasing attention recently, due to its remarkable advantages.…”
Section: Introduction (mentioning)
confidence: 99%
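
Both of the passages above cite network pruning and weight quantization as standard compression routes. A minimal NumPy sketch of the common magnitude-pruning plus uniform-quantization recipe follows; the function name, sparsity level, and bit width are arbitrary choices for illustration, not taken from any of the cited papers:

```python
import numpy as np

def prune_and_quantize(weights, sparsity=0.5, num_bits=8):
    """Magnitude-based pruning followed by uniform quantization.

    Weights with the smallest absolute values are zeroed until the
    requested sparsity is reached; the survivors are snapped to a
    small set of evenly spaced levels so each can be stored with
    `num_bits` bits plus a single scale factor.
    """
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) > threshold          # keep the largest weights
    pruned = weights * mask

    scale = np.abs(pruned).max() / (2 ** (num_bits - 1) - 1)
    if scale == 0:
        return pruned, mask
    quantized = np.round(pruned / scale) * scale
    return quantized, mask

if __name__ == "__main__":
    w = np.random.randn(256, 256)
    wq, mask = prune_and_quantize(w, sparsity=0.9, num_bits=4)
    print(f"nonzero: {mask.mean():.2%}, "
          f"distinct values: {len(np.unique(wq))}")
```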
“…Yoon and Hwang enforce sparsity on the filter through regularization. Chen and others also explored the frequency domain for compression. However, all of these methods are more complex when implemented using standard deep-learning tools, which was actually the reason why we did not choose them as candidates for our approach.…”
Section: Related Work (mentioning)
confidence: 99%