2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No.04CH37541)
DOI: 10.1109/ijcnn.2004.1381133
Visual comparison of performance for different activation functions in MLP networks

Cited by 10 publications (8 citation statements)
References 5 publications
“…In [18], the sigmoid activation function is modified by introducing the square of the argument, enhancing the mapping capabilities of the NN. In [19], two activation functions, one based on the integration of the triangular function and one on the difference between two sigmoids (log-exponential), are proposed and compared through a barycentric plotting technique, which projects the mapping capabilities of the network into a hyper-dimensional cube. The study showed that the log-exponential function converged more slowly but was effective in MLP networks trained with backpropagation.…”
Section: Activation Functions For Easy Training
confidence: 99%
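The statement above describes the two functions from [19] only verbally; their exact parameterizations are not given in the snippet. The sketch below is therefore an assumption-laden reconstruction in NumPy: `c` is a hypothetical shift/width parameter, the "log-exponential" is modeled as the integral of the difference of two shifted sigmoids (a difference of softplus terms), and the second function as the cumulative integral of a triangular pulse on [-c, c].

```python
import numpy as np

# Hedged sketch only: the exact forms used in [19] are not stated in the
# citation above, so both functions are plausible reconstructions.
# `c` is a hypothetical shift/width parameter.

def log_exponential(x, c=1.0):
    # Assumed form: integral of sigma(x + c) - sigma(x - c), i.e. a
    # difference of two softplus terms. Its derivative is the difference
    # of two sigmoids; the function ramps smoothly from 0 to 2c.
    return np.logaddexp(0.0, x + c) - np.logaddexp(0.0, x - c)

def integrated_triangular(x, c=1.0):
    # Assumed form: cumulative integral of a unit-area triangular pulse
    # on [-c, c]; a piecewise-quadratic, sigmoid-like ramp from 0 to 1.
    x = np.asarray(x, dtype=float)
    return np.where(x <= -c, 0.0,
           np.where(x <= 0.0, (x + c) ** 2 / (2 * c ** 2),
           np.where(x < c, 1.0 - (c - x) ** 2 / (2 * c ** 2), 1.0)))

xs = np.linspace(-3, 3, 7)
print(log_exponential(xs))        # smooth ramp saturating at 2c
print(integrated_triangular(xs))  # exactly 0 below -c, exactly 1 above c
```

Scaling `log_exponential` by 1/(2c) would normalize its range to [0, 1] if a sigmoid-like output is wanted.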
“…Different activation functions [4]–[14] are sometimes adopted for different networks, resulting in better performance. An activation function, or transfer function, is needed at the hidden nodes of an MLP to introduce nonlinearity into the network.…”
Section: Activation Functions
confidence: 99%
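As the statement notes, the hidden-node nonlinearity is what gives an MLP its expressive power; without it, stacked layers collapse to a single linear map. A minimal sketch (layer sizes and weights are illustrative, not taken from any cited paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, params, act):
    """One-hidden-layer MLP; `act` is the hidden-node transfer function."""
    W1, b1, W2, b2 = params
    h = act(x @ W1 + b1)   # nonlinearity enters only here
    return h @ W2 + b2     # linear output layer

# Hypothetical sizes, for illustration only.
n_in, n_hidden, n_out = 2, 8, 1
params = (rng.normal(size=(n_in, n_hidden)), np.zeros(n_hidden),
          rng.normal(size=(n_hidden, n_out)), np.zeros(n_out))

x = rng.normal(size=(5, n_in))
y_tanh = mlp_forward(x, params, np.tanh)        # genuinely nonlinear mapping
y_linear = mlp_forward(x, params, lambda z: z)  # identity activation:
# with the identity, the whole network equals x @ (W1 @ W2) + const,
# i.e. it degenerates to a single affine map regardless of depth.
```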
“…This happens due to improper selection of the architecture, poor weight initialization, or poor data selection. Another factor that affects the training process is the choice of transfer function [10].…”
Section: Introduction
confidence: 99%
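To make the last point concrete: in backpropagation the transfer function's derivative multiplies the error signal at every hidden node, so its shape directly affects training speed. The comparison below illustrates a generic property of the logistic and tanh functions, not a result reported in [10]:

```python
import numpy as np

x = np.linspace(-4.0, 4.0, 401)

sig = 1.0 / (1.0 + np.exp(-x))
d_sig = sig * (1.0 - sig)       # logistic derivative, peaks at 0.25
d_tanh = 1.0 - np.tanh(x) ** 2  # tanh derivative, peaks at 1.0

# The backpropagated delta is scaled by these derivatives, so a flatter
# derivative (and saturation far from zero) means slower weight updates.
print(f"max logistic gradient: {d_sig.max():.2f}")   # 0.25
print(f"max tanh gradient:     {d_tanh.max():.2f}")  # 1.00
```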