ISCAS 2001. The 2001 IEEE International Symposium on Circuits and Systems (Cat. No.01CH37196)
DOI: 10.1109/iscas.2001.921290

VLSI neural network with digital weights and analog multipliers

Abstract: A VLSI feedforward neural network is presented that makes use of digital weights and analog multipliers. The network is trained in a chip-in-loop fashion with a host computer implementing the training algorithm. The chip uses a serial digital weight bus implemented by a long shift register to input the weights. The inputs and outputs of the network are provided directly at pins on the chip. The training algorithm used is a parallel weight perturbation technique [1]. Training results are shown for a 2-input, 1 …
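The training scheme the abstract describes — host computer in the loop, weights shifted serially into the chip, all weights perturbed in parallel — can be sketched in software. The network below is a hypothetical surrogate for the chip (the paper's exact topology, perturbation size, and learning rate are not given in this excerpt); `chip_forward`, `sigma`, and `lr` are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def chip_forward(w, x):
    """Software stand-in for the analog chip: a hypothetical 2-2-1
    feedforward network with tanh activations (assumed topology)."""
    w1 = w[:4].reshape(2, 2)   # input-to-hidden weights
    b1 = w[4:6]                # hidden biases
    w2 = w[6:8]                # hidden-to-output weights
    b2 = w[8]                  # output bias
    h = np.tanh(x @ w1 + b1)
    return np.tanh(h @ w2 + b2)

def mse(w, X, y):
    """Host-side error measurement over the training set."""
    out = np.array([chip_forward(w, x) for x in X])
    return float(np.mean((out - y) ** 2))

def train_parallel_perturbation(X, y, w_init, sigma=0.02, lr=0.2, epochs=3000):
    """Parallel weight perturbation: perturb every weight at once,
    measure the change in error, and nudge each weight against its
    own perturbation, scaled by that error change."""
    w = w_init.copy()
    for _ in range(epochs):
        base = mse(w, X, y)
        pert = sigma * rng.choice([-1.0, 1.0], size=w.size)
        delta = mse(w + pert, X, y) - base
        # E[(g . p) p] = sigma^2 g, so on average this step follows
        # the negative gradient without ever computing it explicitly
        w -= lr * delta * pert / sigma**2
    return w

# 2-input, 1-output demonstration task (XOR-like, chosen here for illustration)
X = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])
w0 = rng.normal(0.0, 0.5, size=9)
w_trained = train_parallel_perturbation(X, y, w0)
```

On the real hardware, `mse` would instead drive the input pins, read the output pins, and accumulate the error on the host, and the candidate weights would be quantized to the chip's digital weight resolution before being shifted in over the serial bus.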

Cited by 12 publications (10 citation statements)
References 9 publications
“…A chip implementing the above circuits was fabricated in a 1.2-μm CMOS process [11]. All synapse and neuron transistors were 3.6 μm/3.6 μm to keep the layout small.…”
Section: E. Test Results (mentioning, confidence: 99%)
“…Other approaches for on-chip supervised weight training have been utilized. These approaches include the least-mean-squares algorithm [750], [787], [1025], [1026], weight perturbation [19], [625], [655], [669], [682], [698], [699], [708], [710], [712], [713], [715], [736], [834], [835], [841], [845]- [847], [856], [1078]- [1080], [1098], [1099], [1148], [1304], training specifically for convolutional neural networks [1305], [1306] and others [169], [220], [465], [714], [804], [864], [865], [1029], [1049], [1307]- [1320]. Other on-chip supervised learning mechanisms are built for particular model types, such as Boltzmann machines, restricted Boltzmann machines, or deep belief networks [12], [627], [1135], [1193]<...>…”
Section: A. Supervised Learning (mentioning, confidence: 99%)
“…In addition, his simple synapses involve single transistors. Noteworthy neurons capable of learning have been proposed [35,[41][42][43][44][45][46][47]. Koosh and Goodman [42] put a digital computer in the loop for training, control and weight updates, and the neural network is entirely analog, a style realized by several research groups.…”
Section: Neural Modeling (mentioning, confidence: 99%)