A Variation-Tolerant In-Memory Machine Learning Classifier via On-Chip Training
2018
DOI: 10.1109/jssc.2018.2867275
Cited by 102 publications (38 citation statements). References 16 publications.
“…One potential solution to this problem is to train the network fully on hardware [13]-[15], such that all hardware non-idealities would be de facto included as constraints during training. Another, similar approach is to perform partial optimization of the hardware weights after transferring a trained model to the chip [9], [16], [17]. The drawback of these approaches is that every neural network would have to be trained on each individual chip before deployment.…”
mentioning
confidence: 99%
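As a rough illustration of why training through the hardware absorbs its non-idealities, the sketch below trains a perceptron whose every forward pass runs through a noisy matrix-vector product. The multiplicative Gaussian noise model, the dimensions, and the perceptron rule are illustrative assumptions, not the training scheme of the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_vmm(w, x, sigma=0.05):
    # Each evaluation sees a fresh multiplicative conductance
    # perturbation: a hypothetical Gaussian model of the chip's
    # non-idealities, not the actual device physics.
    w_eff = w * (1.0 + sigma * rng.standard_normal(w.shape))
    return w_eff @ x

# Toy linearly separable task: labels come from a hidden teacher vector.
w_teacher = rng.standard_normal(16)
w = np.zeros(16)

for _ in range(500):
    x = rng.standard_normal(16)
    y = np.sign(w_teacher @ x)          # ideal label
    y_hat = np.sign(noisy_vmm(w, x))    # prediction through the noisy array
    if y_hat != y:                      # perceptron update driven by hardware output
        w += y * x

# Because every update was computed from the noisy forward pass, the
# learned w compensates for this particular "chip's" variation profile.
```

Since the updates depend on the noise realization of one specific array, the learned weights do not transfer cleanly to another chip, which is exactly the deployment drawback the excerpt names.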
“…In relation to the throughput and energy efficiency figures, i.e., TOP/s and TOP/s/W, it has to be noted that the bit precision is not taken into account, thus putting the lowest-precision implementations at an advantage. To adequately reflect the additional computational complexity tackled by multibit accelerators, the respective weight quantization n_w and input quantization n_x can be factored in, similar to the approach taken in [19], yielding precision-scaled TOP/s and TOP/s/W. This is shown in Table III, where recent implementations of analog in-memory MAC-operation accelerators using SRAM combined with capacitors [17], [18], [30], [31] are compared with the presented work.…”
Section: System Implementation Study and Analysis
mentioning
confidence: 99%
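A minimal sketch of such precision scaling, assuming the common convention of multiplying the raw figures by the bit-complexity product n_w * n_x; the exact normalization used in [19] may differ:

```python
def precision_scaled(tops, tops_per_watt, n_w, n_x):
    # Multiply the raw throughput and efficiency by the per-operation
    # bit complexity n_w * n_x to compare macros of unequal precision.
    scale = n_w * n_x
    return tops * scale, tops_per_watt * scale

# Example: a hypothetical 4-bit-weight, 4-bit-input macro
print(precision_scaled(tops=2.0, tops_per_watt=100.0, n_w=4, n_x=4))
# -> (32.0, 1600.0) in 1b-equivalent TOP/s and TOP/s/W
```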
“…Since the total capacitance has an exponential dependence on the number of weight bits (∼2^{n_w}), the chip area scales exponentially with the number of bits as well. Finally, note that the overall area overhead introduced for enabling the IMC capabilities remains manageable since, similar to [19], the standard SRAM-macro internals remained unmodified and exclusively pitch-matched components are added to the periphery. In summary, the system described in [30] delivers high energy and area efficiency for the selected 4-bit input and weight quantization, with the relatively low throughput being the only downside.…”
Section: System Implementation Study and Analysis
mentioning
confidence: 99%
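For intuition on the ∼2^{n_w} dependence: a binary-weighted capacitor array built from a unit capacitance C sums to C*(2^{n_w} - 1). A tiny sketch, with a purely hypothetical unit capacitance and no claim about the actual array in [30]:

```python
# Binary-weighted capacitor array: total capacitance, and with it the
# array area, grows as ~2**n_w. The unit capacitance is a placeholder.
C_UNIT_FF = 1.0  # hypothetical unit capacitor in fF

for n_w in range(1, 9):
    c_total = C_UNIT_FF * (2**n_w - 1)  # C + 2C + ... + 2**(n_w-1)*C
    print(f"n_w={n_w}: total capacitance ~= {c_total:.0f} fF")
```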
“…The stochasticity in the device conductance can cause current variations in bit-cells. For example, conductance variation can be due to stochasticity in filament formation in ReRAM [16,17,21], threshold-voltage variation of access transistors in SRAM [7,22,34], or stochasticity in the threshold voltage for FeFET [24]. Ultimately, conductance variation leads to inaccurate VMM operation [21].…”
Section: Background 2.1 Device Variation and PIM-based VMM
mentioning
confidence: 99%
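To make the final point concrete, the sketch below perturbs an ideal conductance matrix with multiplicative log-normal noise, a simplified stand-in for the filament and threshold-voltage stochasticity named above, and measures the resulting VMM error:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ideal weights mapped to conductances; per-device variation is modeled
# here as multiplicative log-normal noise, a simplifying assumption
# rather than a calibrated device model.
w_ideal = rng.uniform(0.0, 1.0, size=(64, 64))
sigma = 0.1
w_device = w_ideal * rng.lognormal(mean=0.0, sigma=sigma, size=w_ideal.shape)

x = rng.uniform(0.0, 1.0, size=64)  # input activations / voltages
y_ideal = w_ideal @ x
y_device = w_device @ x

rel_err = np.linalg.norm(y_device - y_ideal) / np.linalg.norm(y_ideal)
print(f"relative VMM error at sigma={sigma}: {rel_err:.3f}")
```

Even modest per-device spread accumulates across a column of bit-cells, which is why the cited works treat conductance variation as a first-order accuracy concern for PIM-based VMM.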