2022
DOI: 10.1109/tcsi.2021.3120312
Memory-Efficient CNN Accelerator Based on Interlayer Feature Map Compression

Cited by 20 publications (9 citation statements)
References 34 publications
“…This work analyzed the compression ratio of each fused layer and provided the overall compression ratio of each CNN. A fused layer comprises a convolution layer, a batch-norm layer, an activation layer, and, if the network has one, a pooling layer [39]. The results proved that our method not only reduces the number of data transfers between on-chip storage and off-chip memory but also effectively reduces the required on-chip storage area, while keeping the average accuracy loss around 0.6%.…”
Section: Experimental Results
confidence: 85%
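The fused-layer accounting described in the quote above can be sketched as follows. This is a minimal illustration, not the paper's actual interface: the byte counts and helper names are assumptions.

```python
# Hypothetical sketch: per-fused-layer and overall compression ratio,
# assuming each fused layer reports its raw and compressed feature-map sizes.

def layer_compression_ratio(raw_bytes, compressed_bytes):
    """Compression ratio of a single fused layer (raw / compressed)."""
    return raw_bytes / compressed_bytes

def overall_compression_ratio(layers):
    """Overall ratio for a CNN: total raw bytes over total compressed bytes.

    `layers` is a list of (raw_bytes, compressed_bytes) pairs, one per
    fused layer (conv + batch norm + activation [+ pooling]).
    """
    total_raw = sum(raw for raw, _ in layers)
    total_comp = sum(comp for _, comp in layers)
    return total_raw / total_comp

# Example with three fused layers of different compressibility.
layers = [(1024, 256), (2048, 1024), (512, 512)]
print(overall_compression_ratio(layers))  # (1024+2048+512)/(256+1024+512) = 2.0
```

Aggregating byte totals (rather than averaging per-layer ratios) matches how an overall network-level ratio is normally reported, since it weights each layer by its actual traffic.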
“…In Table III, the proposed method is compared with some state-of-the-art compression methods. This work achieves a better compression ratio and network accuracy on VGG-16, ResNet-50, and MobileNet-v2 than the methods proposed in [15,39]. In terms of hardware implementation, the design reported in [15] integrates the entire PCA process, resulting in a large area overhead.…”
Section: Compact Network
confidence: 99%
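The PCA pipeline that the quote attributes to [15] can be sketched in software to show why a full hardware implementation is area-hungry. The shapes, function names, and random feature map below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of PCA-based feature-map compression: project C flattened
# channels onto k principal components. The SVD (or an equivalent
# eigendecomposition) is the step that is costly to implement in hardware.

def pca_compress(fmap, k):
    """fmap: (C, H*W) matrix of C flattened channels; keep k components."""
    mean = fmap.mean(axis=0, keepdims=True)
    centered = fmap - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                      # (k, H*W) principal axes
    codes = centered @ basis.T          # (C, k) compressed representation
    return codes, basis, mean

def pca_decompress(codes, basis, mean):
    """Approximate reconstruction from the k-dimensional codes."""
    return codes @ basis + mean

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 64))     # 8 channels, 8x8 spatial, flattened
codes, basis, mean = pca_compress(fmap, k=4)
print(codes.size + basis.size + mean.size, "stored values vs", fmap.size)
```

With `k` equal to the channel count the reconstruction is exact; smaller `k` trades accuracy for storage, which is the knob such compression schemes tune per layer.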
“…For a CNN-based SE algorithm to be suitably implemented in memory, power, and computation speed-constrained systems such as hearing devices, significant model compression and acceleration are required. Approaches for CNN compression can be divided into three categories: network pruning [7], precision reduction [8] and design of compact network architectures [9].…”
Section: Introduction
confidence: 99%
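Two of the three compression families listed above, network pruning and precision reduction, can be sketched as follows. The weight tensor, sparsity level, and helper names are illustrative assumptions, not any cited paper's method.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value is the pruning threshold.
    thresh = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= thresh] = 0.0
    return pruned

def quantize_uint8(weights):
    """Affine quantization of float weights to uint8, then dequantize back."""
    lo, hi = weights.min(), weights.max()
    scale = (hi - lo) / 255.0
    q = np.round((weights - lo) / scale).astype(np.uint8)
    return q * scale + lo  # dequantized approximation

w = np.random.default_rng(1).standard_normal(16)
pruned = magnitude_prune(w, 0.5)   # half the weights zeroed
approx = quantize_uint8(w)         # 8-bit approximation of w
```

Pruning shrinks storage by dropping weights outright, while precision reduction keeps every weight but spends fewer bits on each; the third family, compact architectures, is a design-time choice rather than a post-training transform.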