Proceedings of the International Symposium on Low Power Electronics and Design 2018
DOI: 10.1145/3218603.3218616
In-situ Stochastic Training of MTJ Crossbar based Neural Networks

Abstract: Owing to high device density, scalability and non-volatility, Magnetic Tunnel Junction-based crossbars have garnered significant interest for implementing the weights of an artificial neural network. The existence of only two stable states in MTJs implies a high overhead of obtaining optimal binary weights in software. We illustrate that the inherent parallelism in the crossbar structure makes it highly appropriate for in-situ training, wherein the network is taught directly on the hardware. It leads to significantl…
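The abstract's core idea — training directly on two-state devices — can be illustrated with a minimal sketch. In this hedged, illustrative scheme (not the paper's exact update rule), each binary weight is flipped with a probability proportional to the gradient magnitude, mimicking the probabilistic switching of an MTJ under a programming pulse:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_binary_update(W, grad, lr=0.1):
    """Flip binary weights (+1/-1) with probability ~ lr * |grad|,
    but only when the gradient pushes the weight toward the other
    state. Mimics stochastic MTJ switching under a write pulse
    (illustrative sketch, not the authors' exact rule)."""
    p_flip = np.clip(lr * np.abs(grad), 0.0, 1.0)
    # SGD moves against the gradient: w = +1 with grad > 0 wants -1.
    wants_flip = np.sign(grad) == np.sign(W)
    flips = wants_flip & (rng.random(W.shape) < p_flip)
    return np.where(flips, -W, W)

W = rng.choice([-1.0, 1.0], size=(4, 3))
grad = rng.normal(size=(4, 3))
W_new = stochastic_binary_update(W, grad, lr=0.5)
```

Because every flip decision is local to one cell, all updates can be applied to the crossbar in parallel, which is what makes in-situ training attractive.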

Cited by 8 publications (7 citation statements)
References 24 publications
“…Previous works used the stochastic behavior of the STT-MTJ, or other memristive technologies such as resistive RAM (RRAM), to implement hardware accelerators for BNNs [9][10][11][12]. In [9], the research focus was on the architecture level of BNN accelerators, without supporting training.…”
Section: Introduction
confidence: 99%
“…A recently proposed MTJ-based binary synapse comprising a single transistor and a single MTJ device (1T1R) [12] supports training QNNs with binary weights and real value activations. Mondal and Srivastava [12] exploited analog computation to support processing near memory (PNM). Their design, however, requires two update operations to execute the SGD updates.…”
Section: Introduction
confidence: 99%
“…In particular, emerging resistive memory (RRAM) nanodevice based arrays offer an efficient and compact option for performing VMM operations in hardware. Several research groups [2]-[6] have demonstrated this analog computing method for a variety of applications such as linear equation solver [6], image processing [7], data compression [8], feature extraction [9], neural network inference [10], in-situ training [10], [11], etc. Multiple emerging memory nanodevice technologies have been utilized for this application: OxRAM [1], MRAM [11], PCM [12], Ferroelectric FET [13], ECRAM [14], Flash [15], etc.…”
Section: Introduction
confidence: 99%
“…Several research groups [2]-[6] have demonstrated this analog computing method for a variety of applications such as linear equation solver [6], image processing [7], data compression [8], feature extraction [9], neural network inference [10], in-situ training [10], [11], etc. Multiple emerging memory nanodevice technologies have been utilized for this application: OxRAM [1], MRAM [11], PCM [12], Ferroelectric FET [13], ECRAM [14], Flash [15], etc. Owing to their higher device density, crossbar structures are preferred for VMM applications [16].…”
Section: Introduction
confidence: 99%
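The analog VMM these excerpts describe can be sketched numerically. In a crossbar, driving the rows with voltages V produces column currents I whose components sum V_i times the cell conductances (Kirchhoff's current law); signed weights are commonly mapped onto a differential pair of positive conductances, G+ and G-. The mapping below is a common convention, not the scheme of any specific cited design:

```python
import numpy as np

def to_conductances(W, g_min=1e-6, g_max=1e-4):
    """Map signed weights onto two positive conductance arrays
    (differential pair), staying within device limits [g_min, g_max]."""
    scale = (g_max - g_min) / np.max(np.abs(W))
    G_pos = g_min + scale * np.clip(W, 0, None)
    G_neg = g_min + scale * np.clip(-W, 0, None)
    return G_pos, G_neg, scale

def crossbar_vmm(V, G_pos, G_neg, scale):
    # Kirchhoff: each column current is sum_i V_i * G_ij;
    # differential readout subtracts the two columns.
    I = V @ (G_pos - G_neg)
    return I / scale  # rescale currents back to weight units

rng = np.random.default_rng(1)
W = rng.normal(size=(5, 3))
V = rng.normal(size=5)
out = crossbar_vmm(V, *to_conductances(W))
```

Here `out` matches `V @ W` up to floating-point error, showing why a single read cycle of the array performs a full vector-matrix multiply.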
“…In particular, we have explored the accuracy of a 784×200×10 DBN as shown in figure 1 for MNIST pattern recognition, noting that the realization of a synaptic network with MRAM elements presents an attractive opportunity to build an all-MRAM based DBN. It is also worthwhile noticing that the compound synaptic structure proposed here can readily be combined with other neuromorphic computing architectures as discussed in [10] [12][13] [14].…”
confidence: 99%