2023
DOI: 10.1109/tcad.2022.3222288
Training-Free Stuck-At Fault Mitigation for ReRAM-Based Deep Learning Accelerators

Abstract: Although Resistive RAMs (ReRAMs) can support highly efficient matrix-vector multiplication, which is very useful for machine learning and other applications, non-ideal hardware behavior such as stuck-at faults and IR drop is an important concern in building ReRAM crossbar array-based deep learning accelerators. Previous work has addressed the non-ideality problem through either redundancy in hardware, which requires a permanent increase in hardware cost, or software retraining, which may be even more costly or unacc…
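To illustrate the problem the abstract describes, here is a minimal sketch of how stuck-at faults distort a crossbar matrix-vector multiplication. This is not the paper's method; the fault rate, array size, and conductance normalization are arbitrary assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ideal weights mapped to crossbar conductances, normalized to [0, 1].
W = rng.uniform(0.0, 1.0, size=(8, 4))

# Inject stuck-at faults: a random subset of cells is pinned either to the
# low-conductance state (stuck-at-0) or the high-conductance state (stuck-at-1),
# regardless of the weight that was programmed.
fault_rate = 0.1  # illustrative value, not from the paper
fault_mask = rng.uniform(size=W.shape) < fault_rate
stuck_values = rng.choice([0.0, 1.0], size=W.shape)
W_faulty = np.where(fault_mask, stuck_values, W)

# Analog crossbar MVM: input voltages drive the rows, and column currents
# sum the products (Kirchhoff's current law), i.e. y = x @ W.
x = rng.uniform(0.0, 1.0, size=8)
y_ideal = x @ W
y_faulty = x @ W_faulty

print("per-column output error:", np.abs(y_ideal - y_faulty))
```

Because every faulty cell contributes its stuck conductance to the column current on every inference, the error is systematic rather than random noise, which is why training-free mitigation (remapping or compensating weights around known fault locations) is attractive compared to retraining.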

Cited by 4 publications (1 citation statement)
References 36 publications
“…Another key issue is the algorithm-level innovations required for overcoming or minimizing the effect of device-level imperfections. Apart from attempts at compressing neural network architectures [179], RRAM weight mapping algorithms [180], noise-aware training algorithms [181, 182] and fault mitigation algorithms [183] have been reported with much success in recent literature. An alternative strategy is the hardware-software codesign paradigm, where the inherent stochasticity of these devices is incorporated into neural network training and/or inference algorithms [184, 185].…”

Section: Challenges and Future Outlook
Confidence: 99%