2020
DOI: 10.3389/fnins.2020.00406

Mixed-Precision Deep Learning Based on Computational Memory

Cited by 87 publications (86 citation statements) · References 54 publications

“…In order to reduce the data transfers to a minimum in inference accelerators, a promising avenue is to employ in-memory computing using non-volatile memory devices [3-5]. Both charge-based storage devices, such as Flash memory [6], and resistance-based (memristive) storage devices, such as metal-oxide resistive random-access memory [7-10] and phase-change memory (PCM) [11-14], are being investigated for this. In this approach, the network weights are encoded as the analog charge state or conductance state of these devices organized in crossbar arrays, and the matrix-vector multiplications during inference can be performed in-situ in a single time step by exploiting Kirchhoff's circuit laws.…”
mentioning
confidence: 99%
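
To make the in-situ multiplication concrete: with weights stored as conductances G_ij and activations applied as read voltages V_i, Ohm's law and Kirchhoff's current law give a column current I_j = Σ_i V_i·G_ij, i.e. the full matrix-vector product in one read step. A minimal numerical sketch of this (the array names and values are illustrative assumptions, not taken from the cited works):

```python
import numpy as np

# Weights encoded as device conductances in a crossbar array (rows x columns);
# in hardware, each entry would be the programmed state of one memory device.
conductances = np.array([[1.0, 0.2],
                         [0.5, 0.8],
                         [0.1, 0.9]])   # illustrative values

# Input activations applied as read voltages on the crossbar rows.
read_voltages = np.array([0.3, 0.7, 0.1])

# Kirchhoff's current law: the current collected on each column (bit line)
# is I_j = sum_i V_i * G_ij -- the matrix-vector product in a single step.
column_currents = read_voltages @ conductances
print(column_currents)   # [0.66, 0.71]
```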
“…Reproduced with permission. [44] Copyright 2020, Frontiers Media. Projected PCM is a promising approach toward tackling "drift."…”
Section: Accelerators
mentioning
confidence: 99%
“…Under these circumstances, several DNN architectures with novel learning algorithms have been proposed during the last 5 years. One intriguing feature of DNN training is that the forward and backward propagations can reportedly be performed imprecisely, while the gradients need to be accumulated in high precision; this motivated a mixed-precision in-memory computing approach [166]. The key idea is to store the synaptic weights in phase-change devices, where the forward and backward passes are performed, whereas the weight changes are accumulated in high precision, as shown in Figures 24(a)-(e).…”
Section: Phase-change Neuro Network
mentioning
confidence: 99%
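
The scheme quoted above separates where computation and accumulation happen: the matrix-vector products of the forward and backward passes run on the imprecise memory array, while the weight gradients are summed into a high-precision digital accumulator. A minimal single-layer sketch under these assumptions (`analog_matvec`, the noise level, and the learning rate are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_matvec(G, x, noise=0.02):
    """Matrix-vector product performed on the memory array; additive noise
    stands in for device and circuit non-idealities of the analog hardware."""
    y = x @ G
    return y + noise * rng.standard_normal(y.shape)

n_in, n_out = 4, 2
G = rng.normal(0.0, 0.1, (n_in, n_out))   # conductance-encoded weights
chi = np.zeros_like(G)                    # high-precision accumulator
lr = 0.1                                  # learning rate, illustrative

x = rng.normal(size=n_in)                 # input activation
target = np.array([1.0, 0.0])

# Forward pass in-memory; simple squared-error loss.
y = analog_matvec(G, x)
err = y - target

# Backward pass: the error is propagated through the same array
# (a transposed read of the crossbar).
delta_in = analog_matvec(G.T, err)

# The weight gradient is accumulated digitally in high precision; the devices
# are only reprogrammed later, once the accumulator crosses a threshold.
chi -= lr * np.outer(x, err)
```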
“…The synaptic weights are changed by pulses applied to the memory devices once the accumulated weight update exceeds a threshold value. Inspired by this idea, a two-layer neural network whose synaptic weights are represented by two phase-change devices in a differential configuration was used to solve the handwritten-digit classification problem [166]. The test accuracy after 20 epochs of training was approximately 98%.…”
Section: Phase-change Neuro Network
mentioning
confidence: 99%
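
The threshold-and-pulse update, together with the differential two-device weight encoding (w proportional to G⁺ − G⁻), can be sketched as follows. Since PCM conductance is typically increased in small increments, signed updates are realized by pulsing one device or the other of each pair; the names, the pulse granularity `delta_g`, and the threshold `eps` are illustrative assumptions:

```python
import numpy as np

# Each synapse is a pair of devices; the weight is the conductance difference.
g_plus = np.array([0.40, 0.10])
g_minus = np.array([0.15, 0.30])

chi = np.array([0.12, -0.07])   # high-precision accumulated weight updates
eps = 0.05                      # update threshold (weight change of one pulse)
delta_g = 0.05                  # conductance change per programming pulse

# The number of whole thresholds crossed sets the pulse count; the remainder
# stays in the accumulator and contributes to future updates.
pulses = np.trunc(chi / eps).astype(int)   # [2, -1]
chi -= pulses * eps

# Positive updates potentiate g_plus; negative updates potentiate g_minus.
g_plus += np.where(pulses > 0, pulses, 0) * delta_g
g_minus += np.where(pulses < 0, -pulses, 0) * delta_g

print(g_plus - g_minus)   # effective weights after the update: [0.35, -0.25]
```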