2020
DOI: 10.3390/electronics9010134

CENNA: Cost-Effective Neural Network Accelerator

Abstract: Convolutional neural networks (CNNs) are widely adopted in various applications. State-of-the-art CNN models deliver excellent classification performance, but they require a large amount of computation and data exchange because they typically employ many processing layers. Among these processing layers, convolution layers, which carry out many multiplications and additions, account for a major portion of computation and memory access. Therefore, reducing the amount of computation and memory access is the key f…
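To see why convolution layers dominate, a quick multiply-accumulate (MAC) count helps; the sketch below is illustrative only, with a hypothetical VGG-style layer shape that is not taken from the paper:

```python
# Rough MAC count for one convolution layer (hypothetical shapes, not from the paper).
def conv_macs(h_out, w_out, c_in, c_out, k):
    """Each output element needs k*k*c_in multiply-adds."""
    return h_out * w_out * c_out * (k * k * c_in)

# Example: a VGG-like 3x3 layer on a 56x56 feature map with 256 in/out channels.
macs = conv_macs(h_out=56, w_out=56, c_in=256, c_out=256, k=3)
print(f"{macs:,} MACs")  # 1,849,688,064 MACs for this single layer
```

A count of this size repeats across tens of layers in a deep model, which is why accelerator designs such as CENNA target convolution arithmetic and its associated memory traffic.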

Cited by 9 publications (8 citation statements); references 23 publications. Citing publications span 2021–2024.
“…Neuromorphic computing emulates the activity of biological synapses by utilizing artificial neural networks in which synapses are massively interconnected in a dynamic and reconfigurable way to process information in an energy-efficient manner. [6–9] Although the bases of this computation paradigm were formulated in the 1990s, [8,10] a renewed interest has emerged in recent years, jump-started by the discovery and development of advanced materials that might address some of the essential requirements of brain-inspired computing. So far, emulation of artificial synapses has been achieved, to some extent, using phase change materials, [11] superconductors, [12] transistors, [13] spintronic devices [14,15] and memristors.…”
Section: Introduction (mentioning; confidence: 99%)
“…Therefore, it is key to have an energy-efficient model without degradation in performance. A rough estimate of the energy cost per operation in a 45 nm, 0.9 V IC design can be calculated using Table 3 presented in [7,23,14]. The number of multiplication and addition operations in a standalone self-attention layer [20] can be calculated as…”
Section: Computational Analysis (mentioning; confidence: 99%)
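For context, here is a minimal sketch of that kind of estimate. The per-operation energies below are the widely cited 45 nm figures from Horowitz (ISSCC 2014), which may or may not match the Table 3 the authors reference; the operation counts follow a standard self-attention formulation, and the sequence length and model width are hypothetical:

```python
# Back-of-the-envelope energy estimate for one self-attention layer.
# Per-op energies: 45 nm, 0.9 V figures after Horowitz (ISSCC 2014), in picojoules.
E_MUL_FP32 = 3.7   # 32-bit float multiply
E_ADD_FP32 = 0.9   # 32-bit float add

def attention_ops(n, d):
    """Multiplications in one self-attention layer: Q/K/V projections (3*n*d^2),
    QK^T scores (n^2*d), attention-weighted sum (n^2*d), output projection (n*d^2).
    Additions are taken equal to multiplications (one MAC = 1 mul + 1 add)."""
    muls = 4 * n * d * d + 2 * n * n * d
    return muls, muls

n, d = 512, 768                 # hypothetical sequence length and model width
muls, adds = attention_ops(n, d)
energy_pj = muls * E_MUL_FP32 + adds * E_ADD_FP32
print(f"{muls:,} muls, ~{energy_pj / 1e6:.0f} microjoules")  # ~7409 uJ at fp32
```

Estimates like this make the case for quantization and for accelerators that cut multiplication counts, since the multiply energy dominates the add energy at every precision in the Horowitz table.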
“…Accelerating deep neural network processing in edge computing using energy-efficient platforms is an important goal [12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27]. Currently, most object detection and classification models are carried out in graphics processing units.…”
Section: Introduction (mentioning; confidence: 99%)
“…Therefore, many lightweight approaches with low power consumption and low computational cost have emerged recently. A few dedicated neural network accelerators have been implemented on FPGA hardware platforms [12,14,17,21,23], while several authors proposed ASIC-based neural network accelerators [13,15,16,18,19,22]. Samimi et al. [20] proposed a technique based on the residue number system to improve the energy efficiency of deep neural network processing.…”
Section: Introduction (mentioning; confidence: 99%)
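As an illustration of the residue-number-system idea that quote mentions (a generic sketch of RNS arithmetic, not the actual design of Samimi et al. [20]): a value is stored as its residues modulo a set of pairwise-coprime moduli, arithmetic proceeds independently per channel with no long carry chains, and the Chinese remainder theorem recovers the result.

```python
# Minimal residue number system (RNS) sketch: a generic illustration,
# not the specific technique of Samimi et al. [20].
from math import prod

MODULI = (7, 11, 13)          # pairwise coprime; dynamic range = 7*11*13 = 1001

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    # Multiplication happens channel-wise on small residues: no carries
    # propagate between channels, which is the source of the energy savings.
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(r):
    # Chinese remainder theorem reconstruction.
    M = prod(MODULI)
    x = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse (Python 3.8+)
    return x % M

a, b = 23, 31
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == (a * b) % prod(MODULI)
print(from_rns(rns_mul(to_rns(a), to_rns(b))))  # 713 == 23*31
```

The trade-off is that one wide multiplier is replaced by several narrow, independent ones, at the cost of conversion into and out of the residue domain at the datapath boundaries.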