2020
DOI: 10.1063/1.5143815
Analog architectures for neural network acceleration based on non-volatile memory

Abstract: Analog hardware accelerators, which perform computation within a dense memory array, have the potential to overcome the major bottlenecks faced by digital hardware for data-heavy workloads such as deep learning. Exploiting the intrinsic computational advantages of memory arrays, however, has proven to be challenging principally due to the overhead imposed by the peripheral circuitry and due to the non-ideal properties of memory devices that play the role of the synapse. We review the existing implementations o…
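As a rough illustration of the in-memory computation the abstract refers to, the sketch below shows an idealized crossbar performing a matrix-vector multiply in a single analog step via Ohm's and Kirchhoff's laws. This is a generic NumPy stand-in for the device physics, not an implementation from the paper; all values are illustrative.

```python
import numpy as np

# Minimal sketch (assumed, not from the paper): an idealized analog
# crossbar computes a matrix-vector product in one step. Each column
# current is the dot product of the input voltages with that column's
# conductances: I_j = sum_i V_i * G_ij.
def crossbar_mvm(G, v_in):
    """G: (rows, cols) conductance matrix in siemens;
    v_in: (rows,) read voltages. Returns column currents in amps."""
    return v_in @ G

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # device conductances
v = rng.uniform(0.0, 0.2, size=4)         # input read voltages
print(crossbar_mvm(G, v))                 # one "analog" MVM
```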

Cited by 128 publications (81 citation statements)
References 143 publications

Citation statements:
“…While prior demonstrations rely on additional mathematical manipulations of the generated GRNs to establish control over their ÎŒ and σ, we are able to achieve it without any additional manipulations or circuitry [22][23][24][35]. It is common practice in neural network accelerators to use two devices per synapse in order to map both positive and negative weights [37]. Here, the input to the synapse, V_in, is applied as +V_in and −V_in to T+ and T−, respectively, as shown in Fig.…”
Section: Gaussian Random Number Generator-based Synapse
confidence: 99%
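The two-device-per-synapse scheme this statement describes maps a signed weight onto two positive conductances. A minimal sketch of that idea follows; the names G_plus/G_minus are illustrative, not from the cited work.

```python
# Sketch of a differential synapse: applying +V_in to device T+ and
# -V_in to device T- gives a summed current proportional to
# (G_plus - G_minus) * V_in, so the effective weight can be negative
# even though each physical conductance is positive.
def differential_synapse_current(g_plus, g_minus, v_in):
    return g_plus * v_in + g_minus * (-v_in)

v_in = 0.1                                                 # volts
print(differential_synapse_current(8e-5, 3e-5, v_in))      # positive weight
print(differential_synapse_current(3e-5, 8e-5, v_in))      # negative weight
```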
“…The hardware for activation function in neural accelerators is generally realized using standard CMOS-based analog and digital components, and hence these implementations do not utilize the advantages offered by emerging materials [37]. Moreover, hyperbolic tangent (tanh) and sigmoid functions are highly non-linear, significantly complicating their hardware demonstration [38].…”
Section: Neurons With Modified Hyperbolic Tangent Activation Function
confidence: 99%
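One common way hardware designs sidestep the full nonlinearity of tanh is a piecewise-linear ("hard tanh") approximation. The sketch below is a generic stand-in for that idea, not the modified tanh of the cited work.

```python
import numpy as np

# Illustrative only: hard tanh clips the input to [-1, 1], replacing
# the transcendental tanh with a comparator-friendly piecewise-linear
# function (worst-case error vs. tanh is about 0.24, near |x| = 1).
def hard_tanh(x):
    return np.clip(x, -1.0, 1.0)

x = np.linspace(-3, 3, 7)
print(np.tanh(x))     # exact activation
print(hard_tanh(x))   # piecewise-linear approximation
```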
“…When the weights are updated element-wise or row-wise in the crossbar array, the time complexity proportionally increases with an increase in the array size. Crossbar-compatible and fully parallel update schemes have thus been proposed to accelerate neural network training (Burr et al., 2015; Gao et al., 2015; Kadetotad et al., 2015; Gokmen and Vlasov, 2016; Xiao et al., 2020). For the target crossbar array, by applying update pulses simultaneously to all rows and columns based on the neuron's local knowledge of X and δ, respectively, the parallel updates in each cross point can be executed by the number of pulse overlaps.…”
Section: Hardware Neural Network
confidence: 99%
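The pulse-overlap update this statement describes can be sketched as a stochastic coincidence scheme in the spirit of Gokmen and Vlasov (2016): rows fire pulses with probability proportional to |x_i|, columns with probability proportional to |δ_j|, and a cross point steps only when its row and column pulses coincide, so the expected update is the outer product x·δᔀ. Details below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Coincidence-based fully parallel update (illustrative sketch).
# Expected weight change over n_pulses is ~ n_pulses * step * outer(x, delta).
def parallel_update(W, x, delta, n_pulses=100, step=0.01):
    for _ in range(n_pulses):
        row = rng.random(x.shape) < np.abs(x)          # row pulse train
        col = rng.random(delta.shape) < np.abs(delta)  # column pulse train
        coincide = np.outer(row, col)                  # overlaps at cross points
        W += step * coincide * np.outer(np.sign(x), np.sign(delta))
    return W

x, delta = np.array([0.9, 0.2]), np.array([0.5, -0.8])
W = parallel_update(np.zeros((2, 2)), x, delta)
print(W)                   # stochastic estimate of the update
print(np.outer(x, delta))  # target outer-product direction
```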
“…Due to the energy-efficient features of memristors, attempts utilizing memristors to build synapses (Wang et al., 2017) and neurons (Wang et al., 2018) have been made, and achieved great progress. The conductance of RRAM can be modulated by electrical pulses either through a variably conductive filament or through the migration of oxygen vacancies (Milo, 2020; Xiao et al., 2020). In addition, RRAM has attractive features, such as high scalability, low power consumption, fast write/read speed, stable storage, and multi-value tunability.…”
Section: Introduction
confidence: 99%
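A minimal phenomenological picture of the pulse-programmed conductance described above is bounded exponential saturation: each SET or RESET pulse moves the conductance a fixed fraction of the way toward its upper or lower bound. This is an assumed first-order model, not one taken from the cited device papers.

```python
# Assumed first-order RRAM programming model: SET pulses grow the
# conductance toward G_MAX (filament growth), RESET pulses shrink it
# toward G_MIN (oxygen-vacancy migration back), with update size
# shrinking as the device saturates.
G_MIN, G_MAX, ALPHA = 1e-6, 1e-4, 0.1  # bounds in siemens; update rate

def apply_pulse(g, potentiate=True):
    if potentiate:
        return g + ALPHA * (G_MAX - g)  # SET: move toward G_MAX
    return g - ALPHA * (g - G_MIN)      # RESET: move toward G_MIN

g = G_MIN
for _ in range(20):
    g = apply_pulse(g, potentiate=True)
print(f"conductance after 20 SET pulses: {g:.2e} S")
```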