2020
DOI: 10.1109/access.2020.3015714

Channel Compression: Rethinking Information Redundancy Among Channels in CNN Architecture

Abstract: Model compression and acceleration are attracting increasing attention due to the demands of embedded devices and mobile applications. Research on efficient convolutional neural networks (CNNs) aims at removing feature redundancy by decomposing or optimizing the convolutional calculation. In this work, feature redundancy is assumed to exist among channels in CNN architectures, which provides some leeway to boost calculation efficiency. Aiming at channel compression, a novel convolutional construction named co…
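The abstract describes compressing the channel dimension of feature maps to remove cross-channel redundancy. As a rough illustration only (the paper's actual construction is truncated above and not reproduced here), the sketch below shows a generic channel-compression block in PyTorch that squeezes channels with a 1x1 convolution before the 3x3 spatial convolution; the class name and the compression ratio are assumptions made for illustration.

```python
# Illustrative sketch only: a generic channel-compression block that reduces
# cross-channel redundancy with a 1x1 "squeeze" convolution before the spatial
# convolution. The layer names and compression ratio are assumptions, not the
# construction proposed in the paper.
import torch
import torch.nn as nn

class ChannelCompressionBlock(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, compression: float = 0.5):
        super().__init__()
        squeezed = max(1, int(in_channels * compression))
        # 1x1 convolution compresses the channel dimension cheaply,
        # discarding redundancy among channels.
        self.squeeze = nn.Conv2d(in_channels, squeezed, kernel_size=1, bias=False)
        # 3x3 convolution then operates on the reduced channel set,
        # cutting FLOPs roughly by the compression ratio.
        self.spatial = nn.Conv2d(squeezed, out_channels, kernel_size=3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.spatial(self.squeeze(x))))

if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    block = ChannelCompressionBlock(64, 128)
    print(block(x).shape)  # torch.Size([1, 128, 32, 32])
```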

Cited by 19 publications (7 citation statements)
References 22 publications
“…In Figure 12, the recognition results of several comparison models are shown. The different colors indicate CRNN (Convolutional and Recurrent Neural Network) (Hu et al., 2023), DBNs (Deep Belief Networks) (Yang et al., 2018), Swin-Transformer (Chen et al., 2022), SAEs (Sparse Autoencoders) (Ke et al., 2018), and MobileNet (Mobile Network) (Liang et al., 2020), respectively. The primary parameters for the comparative models are comprehensively listed in Table 3.…”
Section: Figure
confidence: 99%
“…Recent works on ASC apply popular low-level feature extraction methods such as the Log-Mel scale [8,21,24,28,36] and Constant-Q transform spectrograms [3]. While the Mel spectrogram is the most widely used feature for acoustic signal processing, several other pre-processing techniques are also applied to capture various aspects of an acoustic scene.…”
Section: Related Work
confidence: 99%
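For readers unfamiliar with the Log-Mel features mentioned in this excerpt, the following is a minimal sketch of how such features are commonly extracted with librosa; the parameter values (sample rate, n_fft, hop_length, n_mels) are typical choices assumed for illustration, not those used in the cited works.

```python
# Minimal sketch of log-Mel feature extraction, as commonly used for acoustic
# scene classification. Parameter values are illustrative defaults.
import numpy as np
import librosa

def log_mel_features(path: str, sr: int = 44100, n_mels: int = 128) -> np.ndarray:
    y, sr = librosa.load(path, sr=sr)                    # load and resample the audio clip
    mel = librosa.feature.melspectrogram(y=y, sr=sr,
                                         n_fft=2048,
                                         hop_length=1024,
                                         n_mels=n_mels)  # Mel-scaled power spectrogram
    return librosa.power_to_db(mel, ref=np.max)          # convert power to log (dB) scale

# Example usage (the file path is hypothetical):
# feats = log_mel_features("scene_clip.wav")  # shape: (n_mels, n_frames)
```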
“…The second is to transfer the knowledge from a large-scale pre-trained model to a small model via knowledge distillation [13,14,15]. The last one is to directly exploit efficient networks for audio classification, such as MobileNets [7,16]. In summary, these methods mainly focus on reducing model size.…”
Section: Introduction
confidence: 99%
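As a rough illustration of the knowledge-distillation option mentioned in this excerpt, the sketch below implements a standard soft-target distillation loss in which a small student mimics the softened logits of a large pre-trained teacher. The temperature and loss weighting are illustrative assumptions, not values taken from the cited papers.

```python
# Minimal sketch of soft-target knowledge distillation: the student is trained
# against both the teacher's softened output distribution and the hard labels.
# Temperature and alpha are illustrative choices.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 4.0,
                      alpha: float = 0.7) -> torch.Tensor:
    # KL divergence between softened teacher and student distributions,
    # scaled by T^2 to keep gradient magnitudes comparable across temperatures.
    soft = F.kl_div(F.log_softmax(student_logits / temperature, dim=1),
                    F.softmax(teacher_logits / temperature, dim=1),
                    reduction="batchmean") * (temperature ** 2)
    # Standard cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```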