2015
DOI: 10.1109/ted.2015.2439635

Experimental Demonstration and Tolerancing of a Large-Scale Neural Network (165 000 Synapses) Using Phase-Change Memory as the Synaptic Weight Element

Cited by 755 publications (551 citation statements). References 9 publications.
“…A generic dot-product engine using memristor arrays for neuromorphic applications was introduced in 2016 [18], and a sparse coding chip that allows lateral neuron inhibition was developed using a 32 × 32 crossbar [31], followed by the demonstration of principal component analysis through online learning in a 9 × 2 crossbar [32]. Large-scale neural networks have also been demonstrated using phase-change memory, following the same principle [33]. In SNNs, a commonly implemented learning rule is spike-timing-dependent plasticity: the synapse between two neurons is strengthened when the pre-synaptic neuron spike precedes the post-synaptic neuron spike, and is weakened when the order is reversed.…”
Section: Nature Electronics (mentioning)
Confidence: 99%
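The pair-based form of the timing rule described in this excerpt is compact enough to sketch. The snippet below is a minimal illustration of exponential STDP; the learning rates A_PLUS and A_MINUS and the time constant TAU are illustrative assumptions chosen for clarity, not parameters from the cited chips or from this paper.

```python
import numpy as np

# Minimal pair-based STDP sketch (illustrative parameters, not from the
# cited works): the weight is potentiated when the pre-synaptic spike
# precedes the post-synaptic spike, and depressed when the order reverses.
A_PLUS, A_MINUS = 0.01, 0.012   # learning rates for potentiation / depression
TAU = 20.0                      # STDP time constant in ms

def stdp_delta_w(t_pre, t_post):
    """Weight change for a single pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:      # pre before post -> potentiation
        return A_PLUS * np.exp(-dt / TAU)
    elif dt < 0:    # post before pre -> depression
        return -A_MINUS * np.exp(dt / TAU)
    return 0.0

# Example: a pre-spike at 10 ms followed by a post-spike at 15 ms
# strengthens the synapse; the reversed order weakens it.
print(stdp_delta_w(10.0, 15.0))   # positive (potentiation)
print(stdp_delta_w(15.0, 10.0))   # negative (depression)
```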
“…For example, a neuromorphic chip developed by a Defense Advanced Research Projects Agency (DARPA) consortium is designed so that its CMOS-fixed synapses, which require offline training by a separate, conventional computer, could be replaced by matrices of tunable nanosynapses, which would allow the chip to learn [8]. Toward this end, a major research effort today aims to realize dense arrays of nanodevices called memristors on top of CMOS neurons, because a single memristor can emulate a synapse [14]–[22].…”
Section: Introduction (mentioning)
Confidence: 99%
“…However, this device programming requires the use of extra circuit elements for monitoring the state of the memristor and shaping the spike accordingly. A second proposed approach is to use multi-memristor synapses (compound synapses with stochastic programming) (Bill and Legenstein, 2014; Burr et al., 2015; Garbin et al., 2015; Prezioso et al., 2015), at the expense of increased area consumption. Only recently have some works demonstrated analog behavior in both potentiation and depression without current or voltage control (Park et al., 2013; Covi et al., 2015, 2016; Matveyev et al., 2015; Brivio et al., 2016; Serb et al., 2016).…”
Section: Introduction (mentioning)
Confidence: 99%
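The compound-synapse idea mentioned in this excerpt can be illustrated with a short sketch: several binary devices are grouped into one synapse, and each programming pulse switches each device only with some probability, so the average weight moves in small steps even though individual devices are two-state. The device count N_DEVICES and switching probability P_SWITCH below are hypothetical values for illustration, not figures from the cited works.

```python
import numpy as np

# Minimal sketch of a compound (multi-memristor) synapse with stochastic
# programming, assuming N_DEVICES binary devices and a per-pulse switching
# probability P_SWITCH; both values are illustrative assumptions.
N_DEVICES = 8        # binary memristors forming one synapse
P_SWITCH = 0.2       # probability that a single pulse switches one device
rng = np.random.default_rng(0)

class CompoundSynapse:
    def __init__(self):
        # 0 = low-conductance (OFF) state, 1 = high-conductance (ON) state
        self.state = np.zeros(N_DEVICES, dtype=int)

    def weight(self):
        # Effective weight is the fraction of devices in the ON state.
        return self.state.mean()

    def potentiate(self):
        # Each device switches ON with probability P_SWITCH per pulse.
        flips = rng.random(N_DEVICES) < P_SWITCH
        self.state = np.where(flips, 1, self.state)

    def depress(self):
        # Each device switches OFF with probability P_SWITCH per pulse.
        flips = rng.random(N_DEVICES) < P_SWITCH
        self.state = np.where(flips, 0, self.state)

syn = CompoundSynapse()
for _ in range(5):
    syn.potentiate()
print(syn.weight())   # climbs toward 1 in small stochastic increments
```

The analog-like resolution here comes from averaging over many binary devices, which trades device count (and hence area) for weight granularity, consistent with the excerpt's note about increased area consumption.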