2018 5th International Conference on Systems and Informatics (ICSAI)
DOI: 10.1109/icsai.2018.8599372

Improving Convolutional Neural Network Using Pseudo Derivative ReLU

Cited by 21 publications (8 citation statements); references 4 publications.
“…Another advantage of ReLU is that it is easy to compute, as the output equals the input if the input is non-negative; otherwise, it equals 0. This ability can alleviate the gradient vanishing and exploding problems that usually occur with the sigmoid or tanh activation functions [40]. Various optimization algorithms, namely Adam (Adaptive Moment Estimation), SGD (Stochastic Gradient Descent), and RMSprop, were employed during the optimization of the model.…”
Section: Proposed Model
confidence: 99%
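The statement above describes the standard ReLU behavior. A minimal NumPy sketch (illustrative only, not code from the cited paper) shows why the forward pass is cheap and why the gradient is exactly 1 for positive inputs, which is what helps against vanishing gradients:

```python
import numpy as np

def relu(x):
    # ReLU: output equals the input if non-negative, otherwise 0
    return np.maximum(x, 0.0)

def relu_grad(x):
    # Derivative is 1 for positive inputs and 0 otherwise, so gradients
    # flowing through active units are not scaled down (unlike sigmoid/tanh,
    # whose derivatives are < 1 and can shrink over many layers)
    return (x > 0).astype(x.dtype)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))       # [0.  0.  0.  0.5 2. ]
print(relu_grad(x))  # [0. 0. 0. 1. 1.]
```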
“…The ReLU and the Softsign functions are applied. The ReLU function performs a threshold operation to set any input less than zero to zero [29]. The Softsign function has a flat curve and slowly decreasing derivatives for more efficient learning [30].…”
Section: Activation Layer
confidence: 99%
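To make the contrast in that statement concrete, a small sketch of the Softsign function x / (1 + |x|) and its derivative 1 / (1 + |x|)^2 (an illustration under standard definitions, not taken from the cited paper) shows the bounded, flat curve and the slowly decaying gradient:

```python
import numpy as np

def softsign(x):
    # Softsign: x / (1 + |x|), a smooth curve bounded in (-1, 1)
    return x / (1.0 + np.abs(x))

def softsign_grad(x):
    # Derivative 1 / (1 + |x|)^2 decays slowly (polynomially) in |x|,
    # unlike the exponentially shrinking tails of sigmoid or tanh
    return 1.0 / (1.0 + np.abs(x)) ** 2

x = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
print(softsign(x))       # [-0.8 -0.5  0.   0.5  0.8]
print(softsign_grad(x))  # [0.04 0.25 1.   0.25 0.04]
```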
“…Besides, they have an acceptable degree of smoothness and are easily differentiated [44], unlike the ReLU function, which has a differentiation problem that can lead to the dying ReLU problem [45]. The interested reader is referred to [32] for a detailed description of DNN and autoencoder algorithms.…”
Section: FKM of an Inter-Module: Deep Neural Network-Based Solution
confidence: 99%
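The dying ReLU problem mentioned here is what the cited paper's pseudo derivative ReLU targets. The sketch below only illustrates the general idea suggested by the title, keeping the ReLU forward pass but back-propagating a small non-zero pseudo derivative for negative inputs; the slope value of 0.05 is an illustrative assumption, not the paper's setting:

```python
import numpy as np

def pdrelu_forward(x):
    # Forward pass is plain ReLU: max(x, 0)
    return np.maximum(x, 0.0)

def pdrelu_backward(x, grad_out, pseudo_slope=0.05):
    # Backward pass replaces the true derivative (0 for x <= 0) with a
    # small non-zero "pseudo derivative", so units with negative inputs
    # still receive a gradient and are less likely to die.
    # pseudo_slope = 0.05 is an illustrative choice, not the paper's value.
    local_grad = np.where(x > 0, 1.0, pseudo_slope)
    return grad_out * local_grad

x = np.array([-2.0, -0.1, 0.3, 1.5])
grad_out = np.ones_like(x)
print(pdrelu_forward(x))             # [0.  0.  0.3 1.5]
print(pdrelu_backward(x, grad_out))  # [0.05 0.05 1.   1.  ]
```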