2019
DOI: 10.1109/tcad.2018.2855145
Fault-Tolerant Training Enabled by On-Line Fault Detection for RRAM-Based Neural Computing Systems

Cited by 38 publications (17 citation statements)
References 34 publications
“…Therefore, poor switching endurance could indirectly lead to low number of conductance states or even failure such as stuck at fault where only one conductance state exists. [76] The impossibility to update the conductance decreases the ANN accuracy, [59] even more so for ex situ training where weights are supposed to be mapped on working devices. The same analysis can be made with the device-to-device variability parameter, which becomes a problem only if this variability concerns critical device characteristics like cycle-to-cycle variability [66] or the overall asymmetry of conductance variation.…”
Section: Box 1 Analysis of Nonideal Parameters of RS Memories That I…
confidence: 99%
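The stuck-at fault described in the quoted passage (a cell frozen in a single conductance state that can no longer be updated) can be illustrated with a minimal sketch. The fault rate, stuck value, and matrix size below are illustrative assumptions, not parameters from the cited work; the mean absolute weight error is just a simple proxy for the accuracy loss the quote discusses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ideal weight matrix (e.g., one crossbar layer), conductances in [0, 1].
W = rng.uniform(0.0, 1.0, size=(64, 64))

def inject_stuck_at(W, fault_rate, stuck_value, rng):
    """Return a copy of W where a random fraction of cells is stuck at a
    single conductance (stuck-at fault: only one state exists, so the
    cell can no longer be programmed), plus the boolean fault map."""
    faulty = W.copy()
    mask = rng.random(W.shape) < fault_rate
    faulty[mask] = stuck_value
    return faulty, mask

# Hypothetical 10% of cells stuck at the low-conductance state.
W_faulty, mask = inject_stuck_at(W, fault_rate=0.1, stuck_value=0.0, rng=rng)

# Mean absolute deviation from the ideal weights: a crude proxy for the
# mapping error that degrades ANN accuracy, especially for ex situ
# training where weights are assumed to land on working devices.
err = np.abs(W - W_faulty).mean()
```

Sweeping `fault_rate` in such a simulation is a common way to estimate how much stuck-at density a given network can tolerate before accuracy collapses.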
“…The effect of faults occurring in the storage of the input is also considered in [13], and [14] proposes on-chip learning for support-vector machines, while decreasing the learning effort using active learning. Finally, a slightly different problem is considered in [15], [16], where the network is trained to compensate for known defect locations.…”
Section: Related Work
confidence: 99%
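The idea in [15], [16] of training the network to compensate for known defect locations can be sketched with a toy example. This is not the cited papers' method, only an illustration of the general technique: a gradient-descent loop that clamps cells at hypothetical stuck positions to their fixed value after every update, so the remaining weights learn to absorb the error. The data, defect map, and hyperparameters are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data for a single linear layer y = X @ w_true.
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = X @ w_true

# Known defect map (hypothetical): these cells are stuck at 0 and
# cannot be programmed.
stuck = np.zeros(8, dtype=bool)
stuck[[1, 5]] = True

def train_fault_aware(X, y, stuck, lr=0.01, steps=2000):
    """Gradient descent that re-applies the defect map after every
    update, so the healthy weights compensate for the stuck cells."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
        w[stuck] = 0.0  # clamp stuck cells to their frozen conductance
    return w

w = train_fault_aware(X, y, stuck)
mse = np.mean((X @ w - y) ** 2)
```

Because the faults are known before (or during) training, the optimizer never relies on the defective cells, which is the essence of the defect-aware training the quoted passage describes.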
“…Reliability analysis, post fabrication test, design-for-test and design-for-reliability are commonly used when dealing with traditional computing architectures, however, they are not common practice when dealing with neuromorphic structures. In this context, there are several research works focusing on the fault tolerance (and how it can be improved) of artificial neural networks (ANNs) [44], on boosting fault tolerance of hardware implemented neural accelerators [45], and even on the effect of fabrication-induced variability of memristive devices on the behavior of deep networks [46] and SNNs [47]. These papers show that faulty neurons have stronger impact on the neural network's behavior than faulty synapses.…”
Section: Neuromorphic Computing Paradigms and Test/Reliability Is…
confidence: 99%