2020 International Symposium on VLSI Technology, Systems and Applications (VLSI-TSA) 2020
DOI: 10.1109/vlsi-tsa48913.2020.9203740
MRAM Acceleration Core for Vector Matrix Multiplication and XNOR-Binarized Neural Network Inference

Cited by 10 publications (8 citation statements); references 1 publication.
“…Binary/Ternary Neural Networks (BNNs/TNNs) [5]-[8] have created an opportunity to realize computational benefits by exploiting the inherent analog computing capabilities of emerging resistive memories [9], [10]. NVM (non-volatile memory) technologies explored for IMC analog multiplication include Flash [11], [12], RRAM (resistive random-access memory) [3], [13]-[16], and MRAM (magnetoresistive RAM) [17], [18]. RRAM-based XNOR bitcells provide the following advantages: (i) smaller area and non-volatility compared to SRAM (≈150 F² per bitcell), (ii) lower operating voltages and faster memory access than Flash, and (iii) lower fabrication cost, smaller area, and lower write energy than MRAM.…”
Section: Introduction
confidence: 99%
“…To overcome the above challenges posed by DNNs and inter-chip communications, on-chip in-memory BNNs with XNOR accumulation have been explored [2], [3], [5], [6], [10], [13], [17]. In [2], [13], in-memory BNNs based on 6T and 8T SRAMs are demonstrated.…”
Section: Introduction
confidence: 99%
“…In CIM design, the processing speed of the memory for AI computation is as important as low power. To meet these requirements, next-generation memories such as Resistive RAM (RRAM) [7][8][9][10] and Magnetoresistive RAM (MRAM) [11,12] are emerging; however, as shown in Table 1, their speed still lags behind SRAM [13,14]. For this reason, SRAM has been considered the most suitable memory for CIM designs in recent years.…”
Section: Introduction
confidence: 99%