2018
DOI: 10.1109/tit.2017.2756880

Energy Propagation in Deep Convolutional Neural Networks

Abstract: Many practical machine learning tasks employ very deep convolutional neural networks. Such large depths pose formidable computational challenges in training and operating the network. It is therefore important to understand how fast the energy contained in the propagated signals (a.k.a. feature maps) decays across layers. In addition, it is desirable that the feature extractor generated by the network be informative in the sense of the only signal mapping to the all-zeros feature vector being the zero input signal. …
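To make the energy-decay question concrete, here is a minimal numerical sketch, not the paper's construction: a toy 1-D convolutional cascade with random filters and a modulus nonlinearity, printing the fraction of the input energy retained in each layer's feature maps. The filter width, normalization, and depth are illustrative assumptions.

```python
# Minimal sketch of per-layer energy decay (illustrative assumptions:
# random filters, modulus nonlinearity, 1-D signals; not the paper's setup).
import numpy as np

rng = np.random.default_rng(0)

def layer(x, num_filters=4, width=9):
    """Convolve x with random filters, take the modulus, concatenate."""
    outputs = []
    for _ in range(num_filters):
        h = rng.standard_normal(width)
        # Normalize so the whole filter bank is roughly non-expansive.
        h /= np.sqrt(num_filters) * np.linalg.norm(h)
        outputs.append(np.abs(np.convolve(x, h, mode="same")))
    return np.concatenate(outputs)

x = rng.standard_normal(256)   # toy input signal
e0 = np.sum(x**2)              # input energy
for k in range(1, 6):
    x = layer(x)
    print(f"layer {k}: energy fraction = {np.sum(x**2) / e0:.4f}")
```

Plotting these fractions against the layer index gives a direct empirical read on the decay rate that the paper bounds analytically.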

Cited by 18 publications (25 citation statements)
References 48 publications
“…The first equality in (20) clearly follows from (30) and (31). The second equality in (20) is an immediate consequence of the first equality and the observation that for…”
Section: Proof of Theorem 3.1 (mentioning)
confidence: 80%
“…Numerical results illustrating the energy decay in the Euclidean domain are given in [19]. Furthermore, theoretical rates are provided in [20] and [21], where [20] introduces additional assumptions on the smoothness of input signals and the bandwidth of filters and [21] studies time-frequency frames instead of wavelets.…”
Section: Energy Preservation (mentioning)
confidence: 99%
“…see [5]). More recently, interesting results have focused on the expressive ability of deeper and sparser architectures [48,9,27,52]. Computational tractability of training networks however is still a major challenge.…”
Section: Related Work (mentioning)
confidence: 99%
“…Remark 3.1. It has been observed in [19,22,24,25] that in the context of scattering networks, most of the input signal's energy is contained in the output of the first two convolutional layers. While the context and the filters here are different, this observation might be interesting also as a background for the usual choice of architecture of CNNs for audio processing.…”
Section: The Structure of CNNs (mentioning)
confidence: 99%
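As a rough numerical companion to this observation, the following sketch (illustrative, and not the filter constructions of [19,22,24,25]) cascades an orthogonal Haar split with a modulus nonlinearity and records what fraction of the input energy leaves the network as output at each layer. Because the split is orthogonal and the modulus preserves the Euclidean norm, the per-layer output energies sum exactly to the input energy.

```python
# Toy scattering-style cascade: at each layer the low-pass half is emitted
# as output and the modulus of the high-pass half is propagated deeper.
# The Haar filters and the depth are illustrative assumptions.
import numpy as np

def haar_split(x):
    """Orthogonal Haar step: ||x||^2 = ||lo||^2 + ||hi||^2 exactly."""
    lo = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return lo, hi

rng = np.random.default_rng(1)
x = rng.standard_normal(2**12)        # toy input signal
total = np.sum(x**2)

u, cumulative = x, 0.0
for m in range(1, 6):
    lo, hi = haar_split(u)
    out = np.sum(lo**2)               # energy emitted as output at layer m
    cumulative += out
    print(f"layer {m}: output fraction = {out/total:.4f}, "
          f"cumulative = {cumulative/total:.4f}")
    u = np.abs(hi)                    # modulus preserves energy; go deeper
```

For an i.i.d. input, roughly half of the remaining energy exits at each layer, so the first two layers already account for about three quarters of the total, consistent in spirit with the observation quoted above.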