2020 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn48605.2020.9206968
Feature Map Transform Coding for Energy-Efficient CNN Inference

Cited by 22 publications (5 citation statements) | References 25 publications
“…In a CNN, the same set of weights, also called filters or kernels, is used to convolve the input feature map at different locations, resulting in different output feature maps. Each filter generates one output feature map, and the number of filters determines the number of output feature maps produced by the layer [9]. Each output feature map represents a set of learned spatial features that the layer is sensitive to.…”
Section: Software Approaches (confidence: 99%)
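The quoted statement describes how each filter in a convolutional layer produces one output feature map. This is not from the cited paper; it is a minimal numpy sketch (the function name `conv2d_valid` and the toy shapes are illustrative assumptions) showing that the number of filters determines the number of output feature maps:

```python
import numpy as np

def conv2d_valid(x, filters):
    """Convolve a single-channel input with a bank of filters.

    x: (H, W) input feature map; filters: (F, k, k) weight bank.
    Returns (F, H-k+1, W-k+1): one output feature map per filter.
    """
    F, k, _ = filters.shape
    H, W = x.shape
    out = np.zeros((F, H - k + 1, W - k + 1))
    for f in range(F):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                # Same filter weights slide over every spatial location.
                out[f, i, j] = np.sum(x[i:i + k, j:j + k] * filters[f])
    return out

x = np.random.rand(8, 8)
filters = np.random.rand(4, 3, 3)  # 4 filters -> 4 output feature maps
maps = conv2d_valid(x, filters)
print(maps.shape)  # (4, 6, 6)
```

Each of the 4 output maps is the response of one learned filter applied across all locations of the input, matching the weight-sharing behavior the excerpt describes.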
“…Lossy compression methods are those in which there is a loss of fidelity; they are commonly applied to natural images such as photographs [17]. Lossy methods include transform coding [18], the discrete cosine transform (DCT) [19], discrete wavelet transforms (DWT) [20], chroma subsampling [21], and fractals. Lossless compression is generally used for medical imaging, drawings, and comics; lossless methods include run-length coding, predictive coding, entropy coding, Huffman coding, and Lempel-Ziv-Welch (LZW) [16], [22], [23].…”
Section: Introduction (confidence: 99%)
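The excerpt names transform coding with the DCT as a lossy technique. As a hedged illustration (not the paper's method), the following numpy sketch builds an orthonormal DCT-II basis, transforms an 8x8 block, discards small coefficients (the lossy step), and inverts the transform; the threshold 0.1 is an arbitrary assumption:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix: rows are frequencies, columns are samples."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= np.sqrt(1.0 / n)   # DC row scaling
    M[1:] *= np.sqrt(2.0 / n)  # AC row scaling
    return M

D = dct_matrix(8)
block = np.random.rand(8, 8)
coeffs = D @ block @ D.T            # forward 2-D DCT
coeffs[np.abs(coeffs) < 0.1] = 0.0  # lossy step: zero out small coefficients
recon = D.T @ coeffs @ D            # inverse 2-D DCT
print(np.max(np.abs(block - recon)))
```

Because the basis is orthonormal, skipping the thresholding step reconstructs the block exactly; all fidelity loss comes from the discarded coefficients, which is what makes the scheme lossy.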
“…To meet this rapidly increasing demand for AI capabilities on embedded systems, such as autonomous vehicles, drones, and medical devices, prior research has focused on various techniques for reducing the power and energy consumption of NNs deployed on hardware accelerators. These techniques include network compression [Lebedev et al., 2015; Ullrich et al., 2017; Chmiel et al., 2020; Baskin et al., 2021a], pruning [Han et al., 2015], neural architecture search [Liu et al., 2019; Wu et al., 2019; Cai et al., 2019], and quantization [Zhou et al., 2016; Hubara et al., 2018].…”
Section: Introduction (confidence: 99%)