2020
DOI: 10.1007/s00521-020-04994-5
On robust randomized neural networks for regression: a comprehensive review and evaluation

Cited by 16 publications (11 citation statements) · References 55 publications
“…The random coefficients in the hidden layers of the neural network are crucial to the performance and accuracy of ELM [30,29,13,6]. It has been observed from the numerical experiments in [6] that the ELM accuracy can be influenced strongly by the maximum magnitude of the random coefficients (i.e.…”
Section: Extreme Learning Machine and Random Coefficients
Confidence: 99%
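The magnitude effect noted in the quote can be illustrated with a minimal ELM sketch (an illustration only, not a reproduction of the experiments in [6]): the random hidden coefficients are drawn uniformly from [-s, s] for several scales s, the hidden layer stays fixed, and only the linear readout is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem
X = np.linspace(-1.0, 1.0, 400).reshape(-1, 1)
y = np.sin(4.0 * X).ravel()

def elm_rmse(scale, n_hidden=60):
    """Fit an ELM whose random coefficients lie in [-scale, scale]."""
    W = rng.uniform(-scale, scale, size=(1, n_hidden))  # fixed random weights
    b = rng.uniform(-scale, scale, size=n_hidden)       # fixed random biases
    H = np.tanh(X @ W + b)                              # random hidden features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)        # train readout only
    return float(np.sqrt(np.mean((H @ beta - y) ** 2)))

rmse = {s: elm_rmse(s) for s in (0.1, 1.0, 10.0, 100.0)}
for s, v in rmse.items():
    print(f"scale {s:6.1f} -> RMSE {v:.4f}")
```

Varying the scale changes the character of the random features, from nearly linear (small scale) to nearly step-like (large scale), which is what drives the accuracy differences the citing paper refers to.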
“…[6]). ELM is one type of random-weight neural networks [37,13], which randomly assign and fix a subset of the network's weights so that the resultant optimization task of training the neural network can be simpler, and often linear, for example, formulated as a linear least squares problem. Randomization can be applied to both feed-forward and recurrent networks, leading to methodologies such as the random vector functional link (RVFL) networks [32,21], the extreme learning machine [20,19], the no-propagation network [45], the echo-state network [22,27], and the liquid state machine [28].…”
Section: Introduction
Confidence: 99%
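The key point of the quote, that fixing the random weights makes training a linear problem, can be sketched as follows (a minimal illustration under simple assumptions, using a small ridge term rather than any specific scheme from the cited papers): once the hidden layer is fixed, the readout weights solve an ordinary regularized least-squares problem in closed form.

```python
import numpy as np

rng = np.random.default_rng(1)

# Data: noisy samples of a smooth target function
X = rng.uniform(-1.0, 1.0, size=(300, 1))
y = np.cos(2.0 * X).ravel() + 0.05 * rng.standard_normal(300)

# Random, fixed hidden layer (never trained)
n_hidden = 80
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)

# Training the readout is a ridge-regularized linear least squares problem:
#   beta = argmin ||H beta - y||^2 + lam ||beta||^2
lam = 1e-6
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

rmse = float(np.sqrt(np.mean((H @ beta - y) ** 2)))
print(f"train RMSE: {rmse:.4f}")
```

Because only `beta` is trained, there is no backpropagation or iterative optimization at all; this is the simplification the quoted passage attributes to random-weight networks such as RVFL and ELM.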
“…The pursuit for higher accuracy and more competitive performance with neural networks for computational PDEs has led us in [10,11] to explore randomized neural networks (including ELM) [45,18]. Since optimizing the entire set of weight/bias coefficients in the neural network can be extremely hard and costly, perhaps randomly assigning and fixing a subset of the network's weights will make the resultant optimization task of network training simpler, and ideally linear, without severely sacrificing the achievable approximation capacity.…”
Confidence: 99%
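The same linearization idea carries over to differential equations, as the quoted passage describes for the randomized-network PDE solvers of [10,11]. A minimal collocation sketch (an illustration under simple assumptions, not the authors' actual scheme): solve u''(x) = -pi^2 sin(pi x) on [0,1] with u(0) = u(1) = 0, whose exact solution is sin(pi x). Since the readout weights enter linearly in both the equation residual and the boundary conditions, the whole solve is a single least-squares problem.

```python
import numpy as np

rng = np.random.default_rng(2)

n_hidden = 50
# Fixed random hidden-layer coefficients
w = rng.uniform(-3.0, 3.0, size=n_hidden)
b = rng.uniform(-3.0, 3.0, size=n_hidden)

def phi(x):
    """Hidden features tanh(w x + b) evaluated at points x."""
    return np.tanh(np.outer(x, w) + b)

def phi_xx(x):
    """Second x-derivative of tanh(w x + b): -2 w^2 t (1 - t^2)."""
    t = np.tanh(np.outer(x, w) + b)
    return (w ** 2) * (-2.0 * t * (1.0 - t ** 2))

# Collocation points and forcing term
xi = np.linspace(0.0, 1.0, 200)
f = -np.pi ** 2 * np.sin(np.pi * xi)

# Stack PDE-residual rows and (weighted) boundary rows into one linear system
bc_weight = 10.0
A = np.vstack([phi_xx(xi), bc_weight * phi(np.array([0.0, 1.0]))])
rhs = np.concatenate([f, [0.0, 0.0]])

beta, *_ = np.linalg.lstsq(A, rhs, rcond=None)

# Compare the network solution against the exact solution u(x) = sin(pi x)
xt = np.linspace(0.0, 1.0, 101)
err = float(np.max(np.abs(phi(xt) @ beta - np.sin(np.pi * xt))))
print(f"max abs error: {err:.2e}")
```

The boundary weight `bc_weight` and the coefficient range [-3, 3] are illustrative choices; the cited works study how such choices affect accuracy in far more detail.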