1990
DOI: 10.1364/ol.15.000227

Optical implementation of large-scale neural networks using a time-division-multiplexing technique

Abstract: A new architecture for the optical implementation of large-scale neural networks is proposed. The architecture is based on a time-division-multiplexing technique in which both the neuron state vector and the interconnection matrix are divided in the time domain. Computer-simulation and experimental results for associative memories demonstrate the architecture's effectiveness in implementing large-scale networks.
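
The paper itself provides no code, but the core idea lends itself to a short illustration. The following is a minimal NumPy sketch, not the authors' implementation: the interconnection matrix and the state vector are partitioned into sub-blocks, and each "time slot" processes one sub-block pair, so hardware that holds only a small sub-matrix at a time can still update the full network. All names, the block size, and the Hopfield-style associative-memory recall loop are assumptions chosen for illustration.

```python
import numpy as np

def tdm_matvec(W, x, block=64):
    """Time-division-multiplexed matrix-vector product.

    The N x N interconnection matrix W and the length-N state vector x
    are split into sub-blocks; each inner iteration stands for one time
    slot in which only a block x block sub-matrix is loaded.
    """
    n = len(x)
    y = np.zeros(n)
    for i in range(0, n, block):            # output (row) block
        for j in range(0, n, block):        # one time slot per input block
            y[i:i + block] += W[i:i + block, j:j + block] @ x[j:j + block]
    return y

def hopfield_recall(W, x, iters=20, block=64):
    """Associative-memory recall using the time-multiplexed product."""
    for _ in range(iters):
        x = np.sign(tdm_matvec(W, x, block))
        x[x == 0] = 1.0                     # break ties toward +1
    return x

# Demo: store one +/-1 pattern via a Hebbian outer product, then
# recover it from a corrupted input.
rng = np.random.default_rng(0)
pattern = rng.choice([-1.0, 1.0], size=256)
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0.0)

noisy = pattern.copy()
noisy[:40] *= -1                            # flip 40 of 256 neurons
print(np.array_equal(hopfield_recall(W, noisy), pattern))  # True
```

Dividing both the state vector and the matrix in time is what lets a processor sized for N/B neurons emulate an N-neuron network, at the cost of B² time slots per update.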

Cited by 14 publications (4 citation statements) | References 6 publications

“…Moreover, Eqs. (14), … Let us compare the outputs of the two neural networks for a given input {u_i} (i = 1, 2, …)…”
Section: Local Interconnection Neural Network
confidence: 99%

“…There have been various works done recently by use of the spatial light modulator (SLM): the vector-matrix multiplication system [1], lenslet-array method [2], correlation system [3], mirror-array system [4], free-space method [5], time-division-multiplexing technique [6], and use of the phase [7] and polarization [8] characteristics of the SLM; approaches using other materials have also been studied, such as photorefractive crystals [9], holographic lenslets [10], neural chips [11], and optical disks [12].…”
Section: Key Terms
confidence: 99%

“…Figure 4 exhibits a plot of tuning ratio versus C_vo/C_cb as obtained from the solution of Eq. (6). This figure also illustrates the curves of tuning ratio versus C_vo/C_D for the two-tuning-varactor case.…”
confidence: 91%

“…Thus, the calculation of ΔX_k requires a matrix-vector multiplication with an added vector D. This is a highly parallel computation and thus is well suited for optical architectures. With time-multiplexing of the neuron values and interconnections [14], one can handle a large number of neurons in the network.…”
Section: Matrix-Vector Formulation
confidence: 99%
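
As a concrete reading of the statement above, here is a small hedged sketch in the same vein as the earlier one. The matrix A, the added vector D, and the relaxation rule X_{k+1} = X_k + ΔX_k are illustrative assumptions, not the cited paper's actual formulation; the point is only that the ΔX_k step is a matrix-vector product plus a vector addition, which a time-multiplexed kernel handles block by block.

```python
import numpy as np

def delta_x(A, x, d, block=64):
    """Hypothetical update step dX = A @ x + d, computed one
    sub-block (time slot) at a time."""
    n = len(x)
    dx = np.empty(n)
    for i in range(0, n, block):
        acc = np.zeros(min(block, n - i))
        for j in range(0, n, block):        # one time slot per sub-block
            acc += A[i:i + block, j:j + block] @ x[j:j + block]
        dx[i:i + block] = acc + d[i:i + block]
    return dx

# Illustrative relaxation X_{k+1} = X_k + dX_k toward A @ x + d = 0.
rng = np.random.default_rng(1)
n = 128
A = -0.05 * np.eye(n) + 0.001 * rng.standard_normal((n, n))
d = rng.standard_normal(n)
x = np.zeros(n)
for _ in range(200):
    x = x + delta_x(A, x, d)
print(np.linalg.norm(A @ x + d))            # near zero at convergence
```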