2018
DOI: 10.1109/tmag.2018.2848625

A Multilevel Cell STT-MRAM-Based Computing In-Memory Accelerator for Binary Convolutional Neural Network

Cited by 66 publications (43 citation statements)
References 8 publications
“…Some of them consider binary approximations, choosing implementations based on emerging technologies. Some works [12,13,26,27] are based on MTJ technology, while [15-18,28,29] use RRAM. In each of these works, the resistive element is used to perform simple logical operations based on a current-sensing technique.…”
Section: NN Implementations Based on LIM Concept
confidence: 99%
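
To make the current-sensing idea concrete, here is a minimal Python sketch of how two resistive cells sharing a bitline can realize OR and AND by thresholding the summed read current against a reference. The read voltage, the low/high resistance values, and the reference-current placement are illustrative assumptions, not parameters from any of the cited designs.

```python
V_READ = 0.2                     # assumed read voltage (V), illustrative
R_LOW, R_HIGH = 2e3, 6e3         # assumed low/high MTJ resistances (ohm)

def cell_current(bit: int) -> float:
    """Current drawn by one cell storing `bit` (1 -> low resistance)."""
    return V_READ / (R_LOW if bit else R_HIGH)

def sense(a: int, b: int, i_ref: float) -> int:
    """Sum the read currents of two cells on a shared bitline and threshold."""
    return int(cell_current(a) + cell_current(b) > i_ref)

def midpoint(x: float, y: float) -> float:
    return 0.5 * (x + y)

# References placed between the current levels each gate must separate:
I_OR = midpoint(2 * cell_current(0), cell_current(1) + cell_current(0))
I_AND = midpoint(cell_current(1) + cell_current(0), 2 * cell_current(1))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "OR:", sense(a, b, I_OR), "AND:", sense(a, b, I_AND))
```

The only design freedom is where the reference current sits: between "no low-resistance cell" and "one low-resistance cell" it senses OR; between "one" and "two" it senses AND.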
“…Sebastian et al. [20] exploit the physical dynamics of PCM material and propose computational PCM to perform temporal correlation detection between stochastic binary processes. One process is encoded into a SET pulse whose amplitude or duration is proportional to the instantaneous sum of all processes and enters the assigned PCM device.…”
[Figure 2 of the citing paper: the execution flow, computing array, and multi-level STT-RAM cell for the convolutional layers of the binary CNN in [12].]
Section: Logic and Basic Arithmetic Operations
confidence: 99%
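
The correlation-detection scheme attributed to Sebastian et al. [20] can be illustrated with a toy simulation: whenever a process is active, its PCM device receives a conductance increment proportional to the instantaneous sum of all processes, so devices assigned to mutually correlated processes accumulate conductance faster. The sketch below models only that accumulation rule; real PCM nonlinearity, saturation, and pulse physics are ignored, and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 2000, 8                      # time steps, number of binary processes
X = np.empty((T, N), dtype=bool)

# Processes 0-3 are noisy copies of a common source (correlated);
# processes 4-7 are independent coin flips (uncorrelated).
common = rng.random(T) < 0.5
X[:, :4] = common[:, None] ^ (rng.random((T, 4)) < 0.2)
X[:, 4:] = rng.random((T, 4)) < 0.5

eta = 1e-3                          # assumed conductance gain per unit pulse
G = np.zeros(N)                     # one PCM device per process
for t in range(T):
    s = X[t].sum()                  # instantaneous sum of all processes
    G[X[t]] += eta * s              # SET pulse with amplitude/duration ∝ s

print(np.round(G, 2))               # devices 0-3 end with higher conductance
```

A correlated process tends to be active exactly when the instantaneous sum is high, so its device's expected increment per step is larger; thresholding the final conductances separates the two groups.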
“…The authors in Reference [3] use an MTJ subarray as part of an accelerator for low-bit-width convolutional NNs. Recent work also covers multi-level MRAM cells to implement BNNs [32]. Networks with binary weights utilizing STT-MRAM were proposed in Reference [2].…”
Section: Related Work
confidence: 99%
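
Since the multi-level-cell idea is central both to [32] and to the paper under discussion, a toy read model may help: a 2-bit MLC is read by comparing the sensed current against three reference currents placed between adjacent resistance levels. The resistance values below are placeholders, not measured device parameters.

```python
V_READ = 0.2                        # assumed read voltage (V)
LEVELS = [2e3, 3e3, 4.5e3, 7e3]     # assumed resistances for states 0..3 (ohm)

def read_mlc(state: int) -> int:
    """Recover a 2-bit state by thresholding the sensed cell current."""
    i = V_READ / LEVELS[state]      # sensed current; falls as state rises
    refs = [0.5 * (V_READ / LEVELS[k] + V_READ / LEVELS[k + 1])
            for k in range(3)]      # references between adjacent levels
    return sum(i < r for r in refs) # count of references the current undercuts

assert all(read_mlc(s) == s for s in range(4))
```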
“…Pinatubo [27] embeds digital logic circuits into the memory for inter-subarray computation. The authors in Reference [32] use an auxiliary processing unit to perform batch normalization, multiplication, and pooling. The design in Reference [3] performs operations such as bit counting, summation, quantization, and batch normalization external to the array.…”
Section: Related Work
confidence: 99%
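
The digital steps that [3] and [32] keep outside the array form a short pipeline: convert raw bitline popcounts into signed dot products, apply batch normalization, and binarize for the next layer. A minimal sketch under assumed parameter names (popcount, gamma, beta, mu, sigma are illustrative, not from the cited papers):

```python
import numpy as np

def bnn_postprocess(popcount, n_bits, gamma, beta, mu, sigma):
    """Near-memory digital steps for one binary-CNN layer (illustrative)."""
    # XNOR-popcount identity: signed dot product = 2 * popcount - n_bits
    dot = 2.0 * popcount - n_bits
    # Batch normalization (inference form), then sign quantization to {+1, -1}
    y = gamma * (dot - mu) / sigma + beta
    return np.where(y >= 0, 1, -1)

pc = np.array([5, 2, 7])            # toy popcounts from three bitlines
print(bnn_postprocess(pc, n_bits=9, gamma=1.0, beta=0.0, mu=0.0, sigma=3.0))
```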