Proceedings of the International Joint Conference on Neural Networks, 2003.
DOI: 10.1109/ijcnn.2003.1223373
Possible nanoelectronic implementation of neuromorphic networks

Cited by 9 publications (4 citation statements)
References 8 publications
“…In this way, each of the two four-switch synapses connecting the cell pair is exposed to training only once and, as described at the end of the CROSSNETS section, the connection probabilities of its switches are saturated to provide virtually deterministic weights. This is the so-called "clipped Hebb rule," which is known to work very well for fully connected Hopfield networks. 55,56 Our estimates 35 show that for a CrossNet trained by this method, P max should scale as M; this is as much as could be expected from the well-known theory of randomly diluted networks. 55,56 (The locality of CrossNets is to some extent similar to dilution.…”
Section: Hopfield-mode Training (supporting)
confidence: 67%
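The clipped Hebb rule quoted above can be illustrated with a minimal pure-Python sketch: sum the usual Hebbian outer products over the stored patterns, then keep only the sign of each weight, so every synapse is a single binary (+1/−1) switch. The network size, pattern count, and corruption level below are illustrative assumptions, not values from the paper.

```python
import random

random.seed(1)
N, M = 120, 6                           # N neurons, M stored patterns (made-up sizes)
patterns = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(M)]

def sgn(x):
    return 1 if x >= 0 else -1

# Clipped Hebb rule: sum the Hebbian products over all patterns, then keep
# only the sign, so each weight collapses to a binary +1/-1 value.
W = [[0 if i == j else sgn(sum(p[i] * p[j] for p in patterns))
      for j in range(N)] for i in range(N)]

def recall(state, steps=10):
    """Synchronous Hopfield updates with a hard sign threshold."""
    for _ in range(steps):
        state = [sgn(sum(W[i][j] * state[j] for j in range(N))) for i in range(N)]
    return state

# Flip 10% of one stored pattern's bits, then let the network restore it.
probe = list(patterns[0])
for i in random.sample(range(N), N // 10):
    probe[i] *= -1
restored = recall(probe)
overlap = sum(r == p for r, p in zip(restored, patterns[0]))
print(overlap, "of", N, "bits correct")
```

At this low load (M/N = 0.05) the binary weights behave almost like the real-valued ones, which is the point of the "clipped" variant: weight discreteness costs little capacity.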
“…We have used this property to demonstrate 34,35 successful training of InBar as a Hopfield (we use this commonly accepted name despite the considerable controversy concerning the authorship of this concept 61 ) network. 34 During this demonstration, we self-imposed all the restrictions anticipated for the future hardware CrossNet implementations.…”
Section: Hopfield-mode Training (mentioning)
confidence: 99%
“…The grid-like connectivity of the brain can be easily translated into a 2D or layered 3D crossbar array architecture with synaptic devices, bringing it one step closer to reaching brain-level connectivity and synaptic density. Hybrid CMOS/nanoelectronic device networks, or crossnets, have been proposed as a basis for neuromorphic systems [124][125][126][127][128]. These networks may be used to perform cognitive tasks that were originally implemented in software using neural network algorithms.…”
Section: Targeted Computing Applications With Synaptic Devices (mentioning)
confidence: 99%
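The core operation a crossbar array provides is an analog vector-matrix multiplication: each cross-point conductance acts as a weight (Ohm's law), and currents sum along each column (Kirchhoff's law). A tiny sketch of that computation, with a hypothetical function name and made-up voltage/conductance values:

```python
def crossbar_vmm(voltages, conductances):
    """Output current on each column: I_j = sum_i V_i * G[i][j]
    (Ohm's law at every cross-point, Kirchhoff summation along the column)."""
    cols = len(conductances[0])
    return [sum(v * g_row[j] for v, g_row in zip(voltages, conductances))
            for j in range(cols)]

V = [0.1, 0.2, -0.1]        # input voltages applied to the rows
G = [[1.0, 0.5],
     [0.2, 1.0],
     [0.8, 0.3]]            # per-junction conductances (arbitrary units)
print([round(i, 6) for i in crossbar_vmm(V, G)])   # -> [0.06, 0.22]
```

In hardware this whole multiply happens in one analog step, which is why crossbar density translates directly into synaptic density.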
“…Since the capacity of such memory is very weakly affected by synaptic weight discreteness, a CrossNet with just one latching switch per synapse may operate very well in this mode; its main difference from the generic Hopfield net is the quasi-local (rather than global) connectivity M, limiting its capacity to ∼0.45 M at 99% restoration fidelity. 53 Figure 12 shows an example of such an operation; the final image is completely error free. However, the most remarkable feature of the pattern restoration is its speed (∼ 5 RC), taking into account that in realistic CrossNets the RC time constant may be below 1 s. (See Section 3 above.)…”
Section: Hopfield Network: Pattern Recognition (mentioning)
confidence: 99%
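The claim that restoration completes in only a few RC time constants corresponds, in a discrete simulation, to the network settling to a fixed point within a handful of update sweeps. A toy check of that behavior, using standard (unclipped) Hebb weights, asynchronous updates, and made-up sizes:

```python
import random

random.seed(2)
N, M = 100, 5                           # illustrative sizes, not from the paper
patterns = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(M)]
sgn = lambda x: 1 if x >= 0 else -1
W = [[0 if i == j else sum(p[i] * p[j] for p in patterns) for j in range(N)]
     for i in range(N)]

# Corrupt 15% of one stored pattern's bits.
probe = list(patterns[0])
for i in random.sample(range(N), 15):
    probe[i] *= -1

# Asynchronous sweeps until no neuron changes (a fixed point). Each sweep
# corresponds loosely to one RC time constant of the analog network.
state, sweeps, changed = list(probe), 0, True
while changed and sweeps < 50:
    changed = False
    sweeps += 1
    for i in range(N):
        new = sgn(sum(W[i][j] * state[j] for j in range(N)))
        if new != state[i]:
            state[i], changed = new, True
print(sweeps, "sweeps;",
      sum(s == p for s, p in zip(state, patterns[0])), "of", N, "bits correct")
```

With symmetric weights and zero diagonal, asynchronous updates are guaranteed to reach a fixed point, and at this low load they do so in just a few sweeps, mirroring the ~5 RC restoration time quoted above.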