1993
DOI: 10.1109/72.207614
Reduction of required precision bits for back-propagation applied to pattern recognition

Abstract: The number of precision bits for operations and data is limited in hardware implementations of backpropagation (BP). Reducing the rounding error due to this limited precision is crucial in such implementations. The new learning algorithm is based on overestimation of the significant error in order to alleviate underflow and the omission of weight updates for correctly recognized patterns. While the conventional BP algorithm minimizes the squared error between output signals and supervising data, the new learning a…
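
As a rough illustration of the underflow problem the abstract describes, the sketch below simulates a BP-style weight update under fixed-point rounding: once the learning-rate-scaled gradient falls below the quantization step, the update rounds to zero and the weight for a nearly-correct pattern is never adjusted. The function name, bit width, and numbers are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def quantize(x, frac_bits=8):
    """Round x to a fixed-point grid with `frac_bits` fractional bits (assumed format)."""
    step = 2.0 ** -frac_bits
    return np.round(x / step) * step

# Illustrative low-precision weight update.
w = quantize(np.float64(0.5), frac_bits=8)
grad = 1e-4                                 # small gradient for a nearly-correct pattern
lr = 0.1

update = quantize(lr * grad, frac_bits=8)   # 1e-5 < 2^-8, so this rounds to 0
w_new = quantize(w - update, frac_bits=8)

print(update)        # 0.0 -> the update underflows
print(w_new == w)    # True -> the weight never moves
```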

Cited by 31 publications (8 citation statements, published 1998–2007). References 8 publications.

“…The "Breast Cancer Wisconsin" dataset has been obtained from the UCI repository 1 and is a two-class classification problem. We trained a network with nine inputs and two hidden neurons on the 683 patterns of this dataset, obtained discarding the ones with missing values.…”
Section: Resultsmentioning
confidence: 99%
“…For these reasons, several methods have been proposed in the literature that deal with this problem. One possible solution aims at finding modified learning algorithms [1] that reduce the accuracy requirements of the network; a second, but equally important, idea is to analyze the effect of weight errors in order to predict the final performance of the network. This has always been done using statistical or heuristic techniques [2]-[5] that can give an answer only in an average sense; furthermore, several limiting assumptions must be made (e.g., weight probability distributions, linearity conditions, etc.)…”
mentioning
confidence: 99%
“…As a rule, designers must balance implementation accuracy and performance reliability. Several weight-quantization techniques have been developed to further reduce the required accuracy without deteriorating the network performance [4,5].…”
mentioning
confidence: 99%
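
To make the weight-quantization idea in the statement above concrete, here is a minimal sketch of uniform post-training quantization of a weight matrix to a given bit width. It is a generic technique for illustration only, not the specific method of the cited paper or of [4,5]; all names and the layer shape are assumptions.

```python
import numpy as np

def quantize_weights(w, bits=8):
    """Uniformly quantize weights to `bits` bits over their observed range."""
    lo, hi = w.min(), w.max()
    levels = 2 ** bits - 1                       # number of quantization steps
    step = (hi - lo) / levels
    return lo + np.round((w - lo) / step) * step

rng = np.random.default_rng(0)
w = rng.normal(size=(9, 2))                      # e.g. a 9-input, 2-hidden-unit layer
w_q = quantize_weights(w, bits=8)

# Per-weight error is bounded by half a quantization step.
print(np.abs(w - w_q).max() <= (w.max() - w.min()) / (2 ** 8 - 1))  # True
```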
“…The aim is to obtain an MLP whose weights are represented by fewer bits, which consumes less memory at the expense of a loss of precision; example studies are given in [2,3,4,5,6]. In our approach, which is novel, we do not reduce the precision; the operations are still performed at full precision.…”
Section: Introduction (mentioning)
confidence: 99%