2019
DOI: 10.1109/tcad.2018.2871198

Design Space Exploration of Neural Network Activation Function Circuits

Abstract: The widespread application of artificial neural networks has prompted researchers to experiment with FPGA and customized ASIC designs to speed up their computation. These implementation efforts have generally focused on weight multiplication and signal summation operations, and less on activation functions used in these applications. Yet, efficient hardware implementations of nonlinear activation functions like Exponential Linear Units (ELU), Scaled Exponential Linear Units (SELU), and Hyperbolic Tangent (tanh)…
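For readers unfamiliar with the functions named in the abstract, here is a minimal NumPy sketch of their standard software definitions. These are the textbook forms of ELU, SELU, and tanh, not the hardware approximations explored in the paper; the SELU constants are the published self-normalizing values.

import numpy as np

def elu(x, alpha=1.0):
    # ELU: identity for x > 0, saturating exponential for x <= 0
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def selu(x, lam=1.0507, alpha=1.6733):
    # SELU: scaled ELU with the published self-normalizing constants
    return lam * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def tanh(x):
    # Hyperbolic tangent, bounded to (-1, 1)
    return np.tanh(x)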


Cited by 51 publications (18 citation statements)
References 14 publications
“…After the characterization and modeling of a single column cell, we have simulated the multiply-accumulate operation in a single tile. The design space of a single tile has been considered according to [24]. The BL, WL and SL resistance has been calculated according to [25].…”
Section: Experiments and Results
confidence: 99%
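The excerpt above describes simulating the multiply-accumulate operation of a single crossbar tile. As a rough functional illustration (deliberately ignoring the BL, WL, and SL parasitic resistances that the cited work models), an idealized tile MAC reduces to a vector-matrix product of input voltages and programmed cell conductances; the function name and tile dimensions below are hypothetical.

import numpy as np

def ideal_tile_mac(voltages, conductances):
    # Idealized multiply-accumulate for one crossbar tile:
    # column (bit-line) currents are the accumulated products of
    # word-line voltages and cell conductances (Ohm's/Kirchhoff's laws).
    # Parasitic BL/WL/SL resistances are ignored in this sketch.
    return voltages @ conductances

# Hypothetical 128x128 tile driven by random inputs
rng = np.random.default_rng(0)
v = rng.uniform(0.0, 0.2, size=128)           # word-line voltages (V)
g = rng.uniform(1e-6, 1e-4, size=(128, 128))  # cell conductances (S)
currents = ideal_tile_mac(v, g)               # per-column currents (A)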
“…When the equation is examined, for 𝑥 > 0 it behaves like the ReLU activation function. SELU can perform the learning process robustly thanks to its self-normalization feature and its analytical convergence to zero mean and unit variance, which allows it to be trained over many layers [22,23].…”
Section: SELU Activation Function
confidence: 99%
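The self-normalizing behaviour mentioned in this excerpt can be checked numerically: repeatedly applying a dense layer followed by SELU to standardized inputs keeps the activations close to zero mean and unit variance, provided the weights are drawn with variance 1/fan_in (LeCun-normal initialization). The sketch below is a generic illustration, not code from either paper; the layer width and depth are arbitrary.

import numpy as np

def selu(x, lam=1.0507, alpha=1.6733):
    return lam * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

rng = np.random.default_rng(0)
x = rng.standard_normal((1024, 512))  # standardized inputs

for layer in range(20):
    # LeCun-normal weights: variance 1/fan_in preserves self-normalization
    w = rng.standard_normal((512, 512)) / np.sqrt(512)
    x = selu(x @ w)
    print(f"layer {layer:2d}: mean={x.mean():+.3f}  var={x.var():.3f}")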
“…x denotes the input vector and b the bias vector. Multiple activation functions are used in the literature, such as the Rectified Linear Unit (ReLU), Leaky ReLU, Parametric ReLU (PReLU) [44], the Exponential Linear Unit (ELU), and Scaled ELU (SELU) [45]. These are explained below.…”
Section: Convolution Layer
confidence: 99%
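For completeness, the ReLU-family variants listed in this excerpt that are not defined in the earlier sketch can be written as follows; the negative-side slope of PReLU is a learned parameter, whereas Leaky ReLU fixes it as a small constant.

import numpy as np

def relu(x):
    # ReLU: zero for negative inputs, identity otherwise
    return np.maximum(x, 0.0)

def leaky_relu(x, slope=0.01):
    # Leaky ReLU: small fixed slope for negative inputs
    return np.where(x > 0, x, slope * x)

def prelu(x, a):
    # PReLU: same form as Leaky ReLU, but the slope 'a' is learned
    return np.where(x > 0, x, a * x)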
“…The only difference is that it uses two parameters in order to reduce the vanishing gradient issue, which improves the learning speed of the model. It is given in Equation 5 [45].…”
Section: Convolution Layer
confidence: 99%
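Equation 5 of the citing paper is not reproduced in this excerpt. It appears to refer to SELU (reference [45] above), whose commonly used two-parameter form is

\[
\mathrm{SELU}(x) = \lambda
\begin{cases}
x, & x > 0 \\
\alpha\,(e^{x} - 1), & x \le 0
\end{cases}
\qquad \lambda \approx 1.0507,\ \alpha \approx 1.6733.
\]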