2019
DOI: 10.1007/978-981-13-6772-4_76

Comparative Study of Convolution Neural Network’s Relu and Leaky-Relu Activation Functions

Cited by 134 publications (58 citation statements)
References 7 publications
“…The input is a one-dimensional feature vector X = [x_1, x_2, x_3, …, x_N] of length N, corresponding to EEG signals from N channels; the convolution layer is composed of K convolution kernels, each of size 1×S, with kernel coefficients w_k ∈ R^S, k = 1, 2, …, K; the output is h = [h_1, h_2, h_3, …, h_K] ∈ R^{(N−S+1)×K}, where h_k = R(X ∗ w_k + b_k), b_k denotes the bias of the convolution kernel, and R denotes the nonlinear activation function, which adopts the Leaky ReLU function [39, 40]. …”
Section: Methods (mentioning)
confidence: 99%
“…where b_k denotes the bias of the convolution kernel and R denotes the nonlinear activation function that adopts the Leaky ReLU function [39, 40]. …”
Section: Shallow Convolution Neural Network (mentioning)
confidence: 99%
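Both excerpts above describe the same shallow 1-D convolution layer: K kernels of width S slide over an N-channel feature vector and each response passes through a Leaky ReLU. The NumPy sketch below is a minimal illustration of that computation; the function names and the 0.01 negative slope are assumptions, not taken from the cited papers.

```python
import numpy as np

def leaky_relu(z, alpha=0.01):
    # Leaky ReLU: pass positive values through, scale negatives by alpha.
    return np.where(z > 0, z, alpha * z)

def conv1d_leaky_relu(X, W, b, alpha=0.01):
    # X: (N,) input vector, W: (K, S) kernel coefficients, b: (K,) biases.
    # Returns h of shape (N - S + 1, K) with h[i, k] = R(X[i:i+S] . w_k + b_k).
    N, (K, S) = X.shape[0], W.shape
    h = np.empty((N - S + 1, K))
    for k in range(K):
        for i in range(N - S + 1):
            h[i, k] = X[i:i + S] @ W[k] + b[k]
    return leaky_relu(h, alpha)

# Tiny example: N = 32 channels, K = 4 kernels of size S = 5.
rng = np.random.default_rng(0)
h = conv1d_leaky_relu(rng.standard_normal(32), rng.standard_normal((4, 5)), np.zeros(4))
print(h.shape)  # (28, 4), i.e. (N - S + 1, K)
```

The printed shape matches the quoted output dimension R^{(N−S+1)×K}.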
“…Similarly, subtask 2 is allocated to identifying the health state of the bearing and is composed of two convolution layers, one pooling layer, two fully connected layers, and a final output layer. For the activation of the fully connected layers of this framework, the leaky rectified linear unit (Leaky ReLU) [65] is used. To prevent overfitting, L2 regularization with a value of 0.04 is applied to the layer before the output layer. …”
Section: Proposed Methodology (mentioning)
confidence: 99%
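As a rough illustration of this subtask-2 head, the Keras sketch below wires two convolution layers, one pooling layer, and two fully connected layers with Leaky ReLU activations, placing an L2(0.04) penalty on the layer before the output. The input shape, filter counts, kernel sizes, convolutional activations, and number of health states are hypothetical; only the layer sequence, the Leaky ReLU on the dense layers, and the 0.04 penalty come from the quote.

```python
from tensorflow.keras import layers, models, regularizers

def build_subtask2_head(input_shape=(1024, 1), n_states=4):
    # Layer sequence follows the quoted description; shapes and filter counts are assumed.
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv1D(32, 9, padding="same", activation="relu"),     # conv layer 1 (activation assumed)
        layers.Conv1D(64, 9, padding="same", activation="relu"),     # conv layer 2 (activation assumed)
        layers.MaxPooling1D(pool_size=4),                            # single pooling layer
        layers.Flatten(),
        layers.Dense(128),
        layers.LeakyReLU(),                                          # Leaky ReLU on the FC layers
        layers.Dense(64, kernel_regularizer=regularizers.l2(0.04)),  # L2 = 0.04 on the layer before the output
        layers.LeakyReLU(),
        layers.Dense(n_states, activation="softmax"),                # health-state output layer
    ])

model = build_subtask2_head()
model.summary()
```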
“…Some changes were made to the default UNet architecture. First, we used the leaky rectified linear unit (Leaky ReLU) activation function, as opposed to a traditional ReLU, in the convolutional layers to avoid "dying ReLU" issues [84]. We also used the AdaMax optimizer [85] instead of Adam (adaptive moment estimation) or RMSProp (root mean square propagation), and included a callback to reduce the learning rate if the loss plateaued for more than 5 epochs. …”
Section: Modeling Training (mentioning)
confidence: 99%
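The training choices in this last excerpt (Leaky ReLU in the convolutional blocks, the AdaMax optimizer, and a learning-rate-reduction callback with a 5-epoch patience) can be sketched in Keras as follows. The tiny stand-in network, the loss, the learning rate, and the reduction factor are assumptions, since the quote does not reproduce the full UNet or its training hyperparameters.

```python
from tensorflow.keras import layers, models, optimizers, callbacks

def conv_block(x, filters):
    # Convolution followed by Leaky ReLU instead of a plain ReLU (the "dying ReLU" fix).
    x = layers.Conv2D(filters, 3, padding="same")(x)
    return layers.LeakyReLU()(x)

# Tiny stand-in for the modified UNet; the real encoder/decoder is omitted here.
inputs = layers.Input(shape=(64, 64, 1))
x = conv_block(inputs, 16)
x = conv_block(x, 16)
outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
model = models.Model(inputs, outputs)

model.compile(
    optimizer=optimizers.Adamax(learning_rate=1e-3),  # AdaMax instead of Adam or RMSProp
    loss="binary_crossentropy",                       # assumed segmentation loss
)

# Reduce the learning rate once the monitored loss plateaus for more than 5 epochs.
reduce_lr = callbacks.ReduceLROnPlateau(monitor="loss", factor=0.5, patience=5)
# model.fit(images, masks, epochs=100, callbacks=[reduce_lr])  # training data not shown
```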