2022
DOI: 10.3389/felec.2022.825077

Exploiting Non-idealities of Resistive Switching Memories for Efficient Machine Learning

Abstract: Novel computing architectures based on resistive switching memories (also known as memristors or RRAMs) have been shown to be promising approaches for tackling the energy inefficiency of deep learning and spiking neural networks. However, resistive switching technology is immature and suffers from numerous imperfections, which are often considered limitations on implementations of artificial neural networks. Nevertheless, a reasonable amount of variability can be harnessed to implement efficient probabilistic or …

Cited by 7 publications (5 citation statements) | References 150 publications
“…[147] Although the stochasticity in device-to-device (DTD) and cycle-to-cycle (CTC) variations of fabricated memristors limits the ability to control a device's conductance (and other properties), it can be exploited in neuromorphic settings to overcome other issues, such as overfitting. Overfitting happens when a network fails to generalize because it is tuned mainly to the training data. [148] By utilizing this stochastic property, previous research has shown that the overfitting problem can be overcome. [149]…”
Section: Discussion and Prospects (mentioning, confidence: 99%)
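The noise-as-regularizer idea referenced in this statement can be made concrete with a short sketch. The following is a minimal illustration, not code from the cited works: it injects multiplicative Gaussian weight noise during training of a toy logistic-regression model, standing in for cycle-to-cycle conductance variation; the data, model, and noise level sigma are all assumptions made for the example.

```python
# Minimal sketch, not from the cited works: multiplicative Gaussian
# weight noise during training, standing in for cycle-to-cycle (CTC)
# memristor conductance variation. Data, model, and sigma are assumed.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data: label depends on the first two features.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

w = rng.normal(scale=0.1, size=10)
b = 0.0
lr, sigma = 0.1, 0.05  # sigma ~ relative conductance spread (assumption)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    # Every forward pass sees a perturbed copy of the weights,
    # as a stochastic device would present.
    w_noisy = w * (1.0 + sigma * rng.normal(size=w.shape))
    p = sigmoid(X @ w_noisy + b)
    w -= lr * (X.T @ (p - y)) / len(y)  # logistic-loss gradient
    b -= lr * np.mean(p - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"train accuracy: {acc:.2f}")
```

Because the forward pass never sees the exact stored weights, training tends to favour solutions that are robust to perturbation, which is one way device stochasticity can act as an implicit regularizer.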
“…Overfitting happens when a network fails to generalize because it is tuned mainly to the training data. [148] By utilizing this stochastic property, previous research has shown that the overfitting problem can be overcome. [149] Some research efforts have been dedicated to eliminating variability by tuning the device fabrication process.…”
Section: Device Level Issues (mentioning, confidence: 99%)
“…The RRAM-based accelerator suffers from various sources of variation and noise [12]. It is difficult to transfer learned weights precisely into the hardware accelerator because of these variations.…”
Section: Weight Transfer With Verification (mentioning, confidence: 99%)
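A common way to cope with such variations is a program-and-verify (write-verify) loop: write toward the target conductance, read back, and repeat until the read-back is within tolerance. The sketch below is a hypothetical model of that idea, not the cited accelerator's actual procedure; write_noise and read_noise are assumed stand-ins for programming and read variability, and the tolerance and iteration budget are illustrative.

```python
# Hypothetical write-verify model, not the accelerator's actual method.
# write_noise / read_noise stand in for programming and read variability.
import numpy as np

rng = np.random.default_rng(1)

def program_with_verify(target, tol=0.02, max_iters=20,
                        write_noise=0.05, read_noise=0.01):
    """Nudge a device toward `target` conductance until a read-back
    falls within `tol`, or the iteration budget runs out."""
    g = 0.0  # hidden device state; starts in a low-conductance state
    for step in range(1, max_iters + 1):
        readback = g + read_noise * rng.normal()      # noisy read
        if abs(readback - target) <= tol:
            return g, step                            # verified
        # Programming pulse: the intended correction lands with
        # multiplicative write variation.
        g += (target - readback) * (1.0 + write_noise * rng.normal())
    return g, max_iters

# Transfer a small set of learned weights (arbitrary units).
weights = rng.uniform(0.1, 0.9, size=5)
programmed = np.array([program_with_verify(t)[0] for t in weights])
print("targets:   ", np.round(weights, 3))
print("programmed:", np.round(programmed, 3))
```

The trade-off is throughput and endurance: each verification round costs extra pulses, so tighter tolerances make the transfer slower and wear the devices faster.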
“…A critical issue in the development of physical NNs is the so-called 'non-idealities' of memristors, which can affect NN performance [20, 21, 27–31]. These include distinct P-D curves with constrained conductance windows ΔG and a discrete number of conductance levels N (the granularity of the curves can be defined as the ratio between the latter and the former).…”
Section: Introduction (mentioning, confidence: 99%)
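To make the window and granularity constraints concrete, here is a minimal, assumed mapping (not taken from the cited references): trained weights are rescaled linearly into a conductance window [g_min, g_max] and snapped to the nearest of N evenly spaced levels. All names and values are illustrative.

```python
# Assumed linear mapping, not taken from the cited refs: rescale trained
# weights into a conductance window [g_min, g_max] with n_levels discrete
# states; granularity is then roughly n_levels / (g_max - g_min).
import numpy as np

def quantize_to_device(weights, g_min=1e-6, g_max=1e-4, n_levels=16):
    """Rescale weights into the window, then snap each value to the
    nearest of n_levels evenly spaced conductance levels."""
    w = np.asarray(weights, dtype=float)
    span = max(w.max() - w.min(), 1e-12)  # guard against constant input
    g = g_min + (w - w.min()) / span * (g_max - g_min)
    levels = np.linspace(g_min, g_max, n_levels)
    idx = np.abs(g[:, None] - levels[None, :]).argmin(axis=1)
    return levels[idx]

w = np.random.default_rng(2).normal(size=8)
print(quantize_to_device(w))
```

A narrower window or fewer levels coarsens the representation, which is why these two non-idealities are usually discussed together as an effective bit-width limit on the stored weights.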
“…On the other hand, it is well established for standard NNs that adding noise during training can improve a network's generalization ability and reduce training losses [30]. A usual interpretation of the latter is that a reasonable amount of noise keeps the loss function from stabilizing in local minima, favouring convergence to a global minimum.…”
Section: Introduction (mentioning, confidence: 99%)
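The local-minima interpretation can be illustrated with a one-dimensional toy experiment. The sketch below is a simulated-annealing-style illustration, not code from the cited reference: on an asymmetric double-well "loss", plain gradient descent started near the shallow well stays there, while annealed Gaussian update noise typically lets the iterate hop into the deeper well. The function, step size, and noise schedule are all assumptions.

```python
# Simulated-annealing-style illustration, not from the cited reference:
# on an asymmetric double-well "loss", plain gradient descent started
# near the shallow well stays there; annealed update noise typically
# lets the iterate hop into the deeper (global) well.
import numpy as np

rng = np.random.default_rng(3)

f = lambda x: x**4 - 3 * x**2 + x    # minima near x ~ 1.13 and x ~ -1.30
df = lambda x: 4 * x**3 - 6 * x + 1  # its derivative

def descend(x, noise=0.0, lr=0.01, steps=5000):
    for t in range(steps):
        anneal = 1.0 - t / steps     # noise decays linearly to zero
        x -= lr * df(x) + noise * anneal * rng.normal()
    return x

x_plain = descend(1.0)               # trapped in the shallow minimum
x_noisy = descend(1.0, noise=0.3)    # typically reaches the deeper one
print(f"plain: x = {x_plain:.2f}, loss = {f(x_plain):.2f}")
print(f"noisy: x = {x_noisy:.2f}, loss = {f(x_noisy):.2f}")
```

The analogy drawn by the citing paper is that memristor read/write stochasticity supplies this kind of perturbation for free, so a moderate level of intrinsic device noise can play the role the injected noise plays here.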