2018
DOI: 10.48550/arxiv.1808.08558
Preprint

Spectral Pruning: Compressing Deep Neural Networks via Spectral Analysis and its Generalization Error

Abstract: The model size of deep neural networks is growing ever larger to realize superior performance on complicated tasks. This makes it difficult to deploy deep neural networks on small edge-computing devices. To overcome this problem, model compression methods have been gathering much attention. However, there have been only a few theoretical backgrounds that explain what kind of quantity determines the compression ability. To resolve this issue, we develop a new theoretical framework for model compression, …
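The abstract is cut off before the method details, but the core spectral idea can be illustrated. Below is a minimal sketch, assuming the common setup where pruning is guided by the eigenvalue decay of a layer's empirical activation covariance; the function name, threshold, and toy data are illustrative assumptions, not the paper's exact algorithm.

import numpy as np

def spectral_degrees_of_freedom(activations, threshold=0.99):
    """Estimate how many spectral components of a layer's activation
    covariance are needed to retain `threshold` of the total variance.

    activations: (n_samples, n_units) array of hidden-layer outputs.
    Returns k, a proxy for the layer's intrinsic dimensionality
    after compression.
    """
    # Empirical (uncentered) covariance of the hidden activations.
    n = activations.shape[0]
    cov = activations.T @ activations / n
    # Eigenvalues of the symmetric PSD covariance, sorted descending.
    eigvals = np.linalg.eigvalsh(cov)[::-1]
    # Smallest k whose cumulative eigenvalue mass reaches the threshold.
    ratio = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(ratio, threshold) + 1)

# Toy usage: a 256-unit layer whose activations live near a 20-dim subspace.
rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 20))
acts = latent @ rng.normal(size=(20, 256)) + 0.01 * rng.normal(size=(1000, 256))
print(spectral_degrees_of_freedom(acts, 0.99))  # prints a small k (~20)

Roughly speaking, a fast eigenvalue decay means a small effective dimensionality, which is what makes a layer compressible with little accuracy loss; this is the kind of quantity the paper's theory ties to generalization.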

Cited by 3 publications (11 citation statements) · References 38 publications

“…The main difficulty in generalization error analysis of deep learning is that the Rademacher complexity of the full model F is quite large. One of the successful approaches to avoid this difficulty is the compression-based bound [1,7,48], which measures how much the trained network f can be compressed. If the network can be compressed to a much smaller one, then its intrinsic dimensionality can be regarded as small.…”
Section: Preliminaries: Problem Formulation and Notations (mentioning, confidence: 99%)
“…The refined version is given in Appendix A. This bound is general and can be combined with the compression bounds derived so far, such as [1,7,48], where the complexity of G and the bias r are analyzed for their generalization error bounds.…”
Section: Compression Bound for Noncompressed Network (mentioning, confidence: 99%)