2016
DOI: 10.17706/jsw.11.5.440-454

Impact of Variances of Random Weights and Biases on Extreme Learning Machine

Abstract: Although the uniform convergence of the extreme learning machine (ELM) has been proved for any continuous probability distribution, the variances of the random numbers used to initialize the input-layer weights and hidden-layer biases have an obvious impact on the generalization performance of ELM. In this paper, we validate this effect by testing the classification accuracies of ELMs initialized with random numbers of different variances. We select three commonly used probability distributions (i.e., Uniform, Gaussian, …)
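
As a rough illustration of the setup the abstract describes (not the authors' code), below is a minimal single-hidden-layer ELM sketch in NumPy. The quantity the paper varies is the spread of the random initialization: here `sigma` is the standard deviation used for both the input-layer weights and the hidden-layer biases. The sigmoid activation and the function names are assumptions made for this sketch.

```python
import numpy as np

def train_elm(X, T, n_hidden, sigma=0.1, seed=None):
    """Minimal ELM sketch: random Gaussian input weights/biases with
    standard deviation `sigma`; output weights solved by least squares."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.normal(0.0, sigma, size=(n_features, n_hidden))  # input-layer weights
    b = rng.normal(0.0, sigma, size=n_hidden)                # hidden-layer biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                   # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ T                             # Moore-Penrose least-squares fit
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Varying `sigma` (or swapping the `rng.normal` call for, e.g., `rng.uniform`) reproduces the kind of experiment the abstract outlines: same architecture, same training data, different initialization variance.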

Cited by 8 publications (1 citation statement)
References 19 publications
“…Even though ELM parameters are chosen at random, the probability distribution used for sampling them may impact the model performance. The authors of [45] compare various distributions with different variances and recommend, as a good default, a Gaussian with mean equal to 0 and a standard deviation less than or equal to 0.1.…”
Section: Square Root Pooling(x)
Citation type: mentioning
Confidence: 99%
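
A hypothetical usage of that recommendation, reusing the sketch above (toy data and shapes are assumptions):

```python
# Toy experiment: Gaussian init with mean 0 and std 0.1, per the quoted advice.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                    # 200 samples, 8 features
T = (X[:, :1] > 0).astype(float)                 # toy binary targets
W, b, beta = train_elm(X, T, n_hidden=50, sigma=0.1)
accuracy = ((predict_elm(X, W, b, beta) > 0.5) == (T > 0.5)).mean()
```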