1996
DOI: 10.1142/s0218126696000108
PERTURBATION TECHNIQUES FOR ON-CHIP LEARNING WITH ANALOGUE VLSI MLPs

Abstract: Microelectronic neural network technology has become sufficiently mature over the past few years that reliable performance can now be obtained from VLSI circuits under carefully controlled conditions (see Refs. 8 or 13 for example). The use of analogue VLSI allows low power, area efficient hardware realisations which can perform the computationally intensive feed-forward operation of neural networks at high speed, making real-time applications possible. In this paper we focus on important issues for the succe…

Cited by 2 publications (1 citation statement); References 0 publications.
“…69 Another strategy, the one on which we concentrate here, is to make some alteration to the standard back-propagation algorithm which will simplify the hardware implementation but retain the essentials of the algorithm. 70,71 The examples we give here are: the virtual targets algorithm, which reduces the difference between the hidden and output layers, at the expense of introducing explicit targets for the hidden layer; and the weight perturbation algorithm, which avoids calculating error/weight gradients by measuring them instead.…”
Section: Algorithms For On-chip Learning
confidence: 99%
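The weight perturbation algorithm described in the citation above can be sketched in a few lines: rather than computing error/weight gradients analytically as back-propagation does, each gradient is measured by nudging one weight and observing the resulting change in network error. The toy linear network, function names, and parameter values below are illustrative assumptions, not the paper's implementation.

```python
def mse(outputs, targets):
    # Mean squared error over the output units.
    return sum((y - t) ** 2 for y, t in zip(outputs, targets)) / len(outputs)

def forward(weights, x):
    # Toy single-layer linear "network" standing in for an analogue MLP,
    # chosen only so the perturbation loop has something to measure.
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def weight_perturbation_step(weights, x, targets, pert=1e-4, lr=0.1):
    """One training step: measure each gradient by finite differences,
    then apply a gradient-descent update. Returns the pre-update error."""
    base_err = mse(forward(weights, x), targets)
    grads = [[0.0] * len(row) for row in weights]
    for i, row in enumerate(weights):
        for j in range(len(row)):
            row[j] += pert                          # perturb one weight
            err = mse(forward(weights, x), targets)  # re-measure the error
            row[j] -= pert                          # restore the weight
            grads[i][j] = (err - base_err) / pert   # measured gradient
    for i, row in enumerate(weights):
        for j in range(len(row)):
            row[j] -= lr * grads[i][j]              # descend on measurements
    return base_err
```

In hardware this is attractive because the loop needs only the forward pass and a scalar error measurement, so no backward signal path has to be built; the cost is one forward pass per weight per step.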