2015
DOI: 10.1109/tcad.2015.2406853

On the Impact of Energy-Accuracy Tradeoff in a Digital Cellular Neural Network for Image Processing

Abstract: This paper studies the opportunities for energy-accuracy tradeoff in the cellular neural network (CNN). Algorithmic characteristics of the CNN are coupled with the hardware-induced error distribution of a digital CNN cell to evaluate the energy-accuracy tradeoff for simple image processing tasks as well as a complex application. The analysis shows that errors modulate the cell dynamics and propagate through the network, degrading the output quality and increasing the convergence time. The error propagation is determined by the ta…
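A rough sense of the abstract's setting — a digital CNN whose state updates are perturbed by hardware-induced error — can be given by a discrete-time simulation. The sketch below is a minimal illustration, not the paper's model: the forward-Euler update, the Gaussian error model, the edge-detection template values, and all parameters are assumptions.

```python
import numpy as np

def cnn_output(x):
    # Standard piecewise-linear CNN activation: y = 0.5 * (|x+1| - |x-1|)
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def simulate_digital_cnn(u, A, B, z, steps=200, dt=0.1, err_std=0.0, seed=0):
    """Forward-Euler simulation of the Chua-Yang state equation
         dx/dt = -x + (A * y) + (B * u) + z
    over a 3x3 neighborhood, with zero-mean Gaussian noise of standard
    deviation err_std injected into every state update to mimic
    hardware-induced error in a digital CNN cell (an assumed error model)."""
    rng = np.random.default_rng(seed)
    x = np.zeros_like(u, dtype=float)
    H, W = u.shape
    up = np.pad(u, 1, mode='edge')
    for _ in range(steps):
        y = cnn_output(x)
        yp = np.pad(y, 1, mode='edge')
        drive = np.full(u.shape, z, dtype=float)
        for di in range(3):
            for dj in range(3):
                drive += A[di, dj] * yp[di:di + H, dj:dj + W]  # feedback term
                drive += B[di, dj] * up[di:di + H, dj:dj + W]  # feedforward term
        x = x + dt * (-x + drive) + rng.normal(0.0, err_std, x.shape)  # error injection
    return cnn_output(x)

# Illustrative binary edge-detection template (values assumed from the
# commonly quoted CNN template library; not taken from the paper itself)
A = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=float)
B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=float)
img = np.where(np.random.default_rng(1).random((32, 32)) > 0.5, 1.0, -1.0)
exact = simulate_digital_cnn(img, A, B, z=-1.0, err_std=0.0)
noisy = simulate_digital_cnn(img, A, B, z=-1.0, err_std=0.05)
print("mean |dy| due to injected error:", float(np.mean(np.abs(exact - noisy))))
```

Raising err_std (a stand-in for operating the cell at a lower-energy, higher-error point) visibly corrupts the output image and slows convergence toward the saturated steady state, consistent with the abstract's claim.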

Cited by 26 publications (3 citation statements)
References 22 publications
“…Here we standardize and reshape the images to fit the network. The number of neurons in the hidden layer is set to 10, the learning rate for the input layer is 0.4, and the learning rate for the output layer is 0.3 [9].…”
Section: Neural Network Training Process
Confidence: 99%
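The quoted configuration (10 hidden neurons, learning rate 0.4 for the input layer and 0.3 for the output layer) can be written out as a small backpropagation loop. In this sketch only those three numbers come from the excerpt; the input dimension, sigmoid activations, squared loss, and toy data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Only n_hidden=10 and the per-layer rates 0.4/0.3 come from the excerpt;
# the input size and single sigmoid output are assumed for illustration.
d, n_hidden = 64, 10
lr_in, lr_out = 0.4, 0.3

W1 = rng.normal(0.0, 0.1, (n_hidden, d)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (1, n_hidden)); b2 = np.zeros(1)

def train_step(x, t):
    """One backprop step with a distinct learning rate per layer."""
    global W1, b1, W2, b2
    h = sigmoid(W1 @ x + b1)               # hidden activations
    y = sigmoid(W2 @ h + b2)               # network output
    d_out = (y - t) * y * (1.0 - y)        # output-layer delta (squared loss)
    d_hid = (W2.T @ d_out) * h * (1.0 - h) # hidden-layer delta
    W2 -= lr_out * np.outer(d_out, h); b2 -= lr_out * d_out
    W1 -= lr_in * np.outer(d_hid, x);  b1 -= lr_in * d_hid
    return 0.5 * ((y - t) ** 2).item()

# Toy usage on standardized, flattened "images"
X = rng.normal(0.0, 1.0, (100, d))
T = (X.mean(axis=1) > 0).astype(float)     # arbitrary labels for the demo
for epoch in range(20):
    loss = sum(train_step(x, t) for x, t in zip(X, T)) / len(X)
print("final mean loss:", loss)
```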
“…Learning efficiency and structural sparsification are two important issues in the study and application of neural networks. Learning efficiency is mainly concerned with the choice of learning method, so as to achieve good learning accuracy on the training samples and good generalization (test) accuracy on untrained samples [1][2][3][4][5]. The aim of structural sparsification is to use fewer nodes and connections (weights) without harming learning efficiency [6][7][8][9][10].…”
Section: Introduction
Confidence: 99%
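As a concrete, generic illustration of structural sparsification, one common approach is proximal gradient descent with an L1 penalty, whose soft-threshold step sets small weights exactly to zero and thereby prunes connections. This is an assumption chosen for illustration, not the method of the cited works.

```python
import numpy as np

def l1_prox_step(W, grad, lr=0.1, lam=1e-3):
    """One proximal-gradient (ISTA) step: an ordinary gradient step followed
    by soft-thresholding, which zeroes out small weights.  Illustrative only;
    a group-Lasso penalty applied per node would analogously remove whole
    neurons rather than individual connections."""
    W = W - lr * grad                                        # gradient step
    return np.sign(W) * np.maximum(np.abs(W) - lr * lam, 0.0)  # soft-threshold

def sparsity(W):
    # Fraction of connections pruned (weights exactly zero)
    return float(np.mean(W == 0.0))
```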
“…[1][2][3][4][5] Neural networks usually possess the characteristics of adaptivity, nonlinearity, parallelism, and distributed storage, which can be used to solve complicated problems that cannot be solved by other approaches. [6][7][8][9] Specifically, the applications of neural networks include (but are not limited to) pattern classification, [10,11] deep learning, [12,13] approximation and prediction, [14,15] image processing, [16,17] machine learning, [18,19] optimization and computation, [20,21] and complex system control [22][23][24] (including robot system control). [25,26] Owing to these extensive and significant applications, the development and investigation of neural networks have become active topics for researchers in biology, mathematics, physics, and computer science.…”
Section: Introduction
Confidence: 99%