2021 IEEE International Symposium on Circuits and Systems (ISCAS)
DOI: 10.1109/iscas51556.2021.9401695

STT-MRAM Architecture with Parallel Accumulator for In-Memory Binary Neural Networks

Abstract: In this paper, a row-wise XNOR accumulator architecture for STT-MRAM arrays is proposed for parallel and efficient multiply-and-accumulate (MAC) operation. The proposed accumulator supports in-memory computing and binary neural network (BNN) applications. In the proposed architecture, inputs are fed from the complementary bitlines, whereas readout is performed through a time-based sense amplifier (TBS). The proposed architecture, which does not require any ADC, can exhibit an average error rate of 0.085 for XNOR …
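The arithmetic such an accumulator parallelizes is the standard XNOR-popcount form of a binary dot product. Below is a minimal behavioral sketch in Python/NumPy; the function name bnn_mac and the {-1,+1}-to-{0,1} bit encoding are illustrative assumptions, not the paper's implementation.

    import numpy as np

    # Behavioral sketch of XNOR-and-accumulate for a BNN (illustrative only).
    def bnn_mac(inputs_pm1, weights_pm1):
        """Binary MAC: dot product of {-1,+1} vectors via XNOR + popcount."""
        # Encode {-1,+1} as bits {0,1}
        x = (inputs_pm1 > 0).astype(np.uint8)
        w = (weights_pm1 > 0).astype(np.uint8)
        # XNOR of the bit encodings: 1 wherever input and weight agree
        xnor = 1 - (x ^ w)
        # popcount of matches, mapped back to the {-1,+1} dot product:
        # dot = (#agree) - (#disagree) = 2 * (#agree) - N
        n = x.size
        return 2 * int(xnor.sum()) - n

    rng = np.random.default_rng(0)
    a = rng.choice([-1, 1], size=64)
    b = rng.choice([-1, 1], size=64)
    assert bnn_mac(a, b) == int(a @ b)  # XNOR-popcount equals the true dot product

In the proposed array this sum is accumulated in analog on the bitlines and read out through the time-based sense amplifier rather than an ADC.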

Citations: cited by 8 publications (3 citation statements). References: 16 publications.
“…2. Proposed usage of 2T2J STT-MRAM bitcell for single-bit XNOR of input feature and weight (modified from [23]). Inputs of a given layer are applied to the wordlines of the array, and the bit-wise products with the respective weights stored in a column are summed up and accumulated in the form of its bitline current.…”
Section: Proposed In-Memory Accelerator at Bitcell Level (mentioning; confidence 99%)
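As a rough behavioral model of that statement: each column holds one output's binary weights, every wordline input addresses all columns at once, and the bitline current is proportional to the number of matching bitcells in the column. The sketch below uses hypothetical names and models no device physics; the count per column stands in for the accumulated bitline current.

    import numpy as np

    def bitline_counts(x_bits, W_bits):
        """x_bits: (rows,) input bits; W_bits: (rows, cols) weight bits.
        Returns, per column, the number of bitcells whose XNOR output is 1."""
        # XNOR per bitcell: 1 when the input bit matches the stored weight bit
        match = 1 - (x_bits[:, None] ^ W_bits)
        return match.sum(axis=0)  # summed along each column, like bitline current

    rng = np.random.default_rng(1)
    x = rng.integers(0, 2, size=128, dtype=np.uint8)
    W = rng.integers(0, 2, size=(128, 16), dtype=np.uint8)
    counts = bitline_counts(x, W)   # one accumulated value per column, in parallel
    pre_act = 2 * counts - x.size   # equivalent {-1,+1} dot products per column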
“…The coordinated adoption of such techniques improves the energy efficiency of the periphery, eliminating voltage reference generation and distribution. This work is an extended version of our previous work in [23]. As additional contributions of this manuscript, select line boosting is introduced to further enhance the classification accuracy, and the area/throughput/energy/accuracy tradeoff is explored and quantified from circuit to algorithm.…”
(mentioning; confidence 99%)
“…In-memory computing is a promising approach to alleviate the von Neumann bottleneck, where resistive memory devices are used for both processing and memory [2,3]. In-memory computing generally requires fast, high-density, low-power, scalable resistive memory devices, such as resistive random access memory (ReRAM) [4,5], phase-change memory (PCM) [6]-[9], ferroelectric RAM (FeRAM) [10], and magnetoresistive RAM (MRAM) [11]-[18]. A crosspoint array of these resistive memory devices provides a hardware accelerator for the matrix-vector multiplication (MVM).…”
Section: Introduction (mentioning; confidence 99%)
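The MVM mapping in that statement follows from Ohm's and Kirchhoff's laws: each device stores a conductance, row voltages encode the input vector, and each column current is the corresponding weighted sum. A minimal idealized sketch, ignoring wire resistance, sneak paths, and device variation; all names are illustrative:

    import numpy as np

    def crosspoint_mvm(G, v):
        """G: (rows, cols) device conductances in siemens; v: (rows,) volts.
        Column j collects I_j = sum_i G[i, j] * v[i] (Ohm + Kirchhoff)."""
        return G.T @ v  # the whole MVM happens in one analog read step

    G = np.array([[1e-6, 2e-6],
                  [3e-6, 4e-6]])   # stored conductances
    v = np.array([0.2, 0.1])       # read voltages applied to the rows
    I = crosspoint_mvm(G, v)       # column currents encode the MVM result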