2022 | DOI: 10.1002/aisy.202200029
Tolerating Noise Effects in Processing‐in‐Memory Systems for Neural Networks: A Hardware–Software Codesign Perspective

Abstract: Neural networks have been widely used for advanced tasks, from image recognition to natural language processing. Many recent works focus on improving the efficiency of executing neural networks in diverse applications. Researchers have advocated processing‐in‐memory (PIM) architecture as a promising candidate for training and testing neural networks because PIM designs can reduce the communication cost between storage and computing units. However, noise exists in the PIM system, generated from the intrinsic…

Cited by 12 publications (2 citation statements) | References 34 publications
“…Apart from attempts at compressing neural network architectures [179], RRAM weight mapping algorithms [180], noise-aware training algorithms [181, 182], and fault mitigation algorithms [183] have been reported with much success in recent literature. An alternative strategy is the hardware–software codesign paradigm, where the inherent stochasticity of these devices is incorporated into neural network training and/or inference algorithms [184, 185]. Finally, the technological adaptation of RRAM devices for neuromorphic computing requires major innovations in terms of scaling capabilities.…”
Section: Challenges and Future Outlook (mentioning)
confidence: 99%
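The codesign strategy described in the statement above, incorporating device stochasticity into training, is commonly realized by perturbing weights on every forward pass so the optimizer learns noise-tolerant parameters. The sketch below is an illustrative assumption only; the PyTorch framing, the NoisyLinear layer name, and the 5% relative variation are hypothetical choices, not the method of the cited works.

```python
# Minimal sketch (assumed, not taken from the cited works): incorporating
# device stochasticity into training by perturbing weights each forward pass.
# The multiplicative-Gaussian noise model and rel_sigma are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Linear):
    """Linear layer whose weights are perturbed like noisy RRAM conductances."""

    def __init__(self, in_features, out_features, rel_sigma=0.05):
        super().__init__(in_features, out_features)
        self.rel_sigma = rel_sigma  # assumed relative device variation

    def forward(self, x):
        if self.training:
            # Sample fresh noise every step so the optimizer sees, and learns
            # to tolerate, weight perturbations rather than fixed weights.
            noise = 1.0 + self.rel_sigma * torch.randn_like(self.weight)
            return F.linear(x, self.weight * noise, self.bias)
        return F.linear(x, self.weight, self.bias)

# Usage: drop-in replacement for nn.Linear in an otherwise standard model.
model = nn.Sequential(NoisyLinear(784, 256), nn.ReLU(), NoisyLinear(256, 10))
```

Resampling the noise at each step is the key design choice in this sketch: the network never sees one fixed perturbation, so accuracy degrades more gracefully when deployed on a device whose conductances deviate from the trained values.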
“…These imperfections can result in degraded accuracy for neural-network inference performed using them [9, 17–19]. To mitigate the impact of noise, noise-aware training schemes have been developed [20–27]. These schemes treat the noise as a relatively small perturbation to an otherwise deterministic computation, either by explicitly modeling the noise as the addition of random variables to the processor's output or by modeling the processor as having finite bit precision.…”
(mentioning)
confidence: 99%
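The two noise models named in this statement, additive random variables on the processor's output and finite bit precision, can be illustrated with a small simulation. The following NumPy sketch is a hedged example under assumed parameters; out_sigma, adc_bits, and the dynamic-range handling are illustrative choices, not taken from the cited works.

```python
# Minimal sketch (assumed): a PIM-style matrix-vector multiply with
# (1) additive Gaussian noise on the analog output and
# (2) finite-bit quantization of the result, an idealized ADC.
import numpy as np

def pim_matvec(W, x, out_sigma=0.01, adc_bits=8):
    """Ideal matvec plus additive output noise, then uniform quantization."""
    y = W @ x
    # Model 1: noise as random variables added to the processor's output.
    y = y + out_sigma * np.random.randn(*y.shape)
    # Model 2: finite bit precision -- quantize to 2**adc_bits levels over
    # the observed dynamic range (a simplifying assumption).
    lo, hi = y.min(), y.max()
    if hi == lo:
        return y
    step = (hi - lo) / (2 ** adc_bits - 1)
    return lo + np.round((y - lo) / step) * step

# Example: compare the noisy PIM output against the exact result.
rng = np.random.default_rng(0)
W, x = rng.standard_normal((16, 32)), rng.standard_normal(32)
print(np.max(np.abs(pim_matvec(W, x) - W @ x)))
```

Training against such a simulator (or simply injecting the same perturbations during backpropagation) is what these noise-aware schemes mean by treating the noise as a small perturbation to an otherwise deterministic computation.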