2021
DOI: 10.48550/arxiv.2103.08042
Preprint

A Modified Batch Intrinsic Plasticity Method for Pre-training the Random Coefficients of Extreme Learning Machines

Suchuan Dong,
Zongwei Li

Abstract: In extreme learning machines (ELM), the hidden-layer coefficients are randomly set and fixed, while the output-layer coefficients of the neural network are computed by a least squares method. The randomly assigned coefficients in ELM are known to influence its performance and accuracy significantly. In this paper we present a modified batch intrinsic plasticity (modBIP) method for pre-training the random coefficients in the ELM neural networks. The current method is devised based on the same principle as the bat…


Cited by 4 publications (22 citation statements)
References 44 publications
“…Therefore, in order to compute the Jacobian matrix we first solve equation (4) for β_LS by the linear least squares method, and then use (10) to compute J_0(θ). We then compute J_1(θ) by equations (13) and (14). Finally the Jacobian matrix ∂r/∂θ is computed by equation (12).…”
Section: Variable Projection Methods for Solving Linear PDEs
confidence: 99%
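
The quoted passage outlines a three-step assembly of the variable projection Jacobian. A rough numpy sketch of that sequence follows; A_fn and dA_fn are hypothetical helpers standing in for the paper's equations (4) and (10)-(14), which are not reproduced here, and the direct/indirect split below comes from differentiating the normal equations, so it may differ in detail from the paper's exact J_0(θ) and J_1(θ) terms.

import numpy as np

def varpro_jacobian(theta, A_fn, dA_fn, b):
    # Assemble dr/dtheta for r(theta) = A(theta) beta_LS(theta) - b.
    # A_fn(theta) builds the least squares matrix; dA_fn(theta, k)
    # builds its derivative with respect to theta_k (both hypothetical).
    A = A_fn(theta)
    # Step 1: solve the inner linear least squares problem for beta_LS.
    beta, *_ = np.linalg.lstsq(A, b, rcond=None)
    r = A @ beta - b
    AtA = A.T @ A
    J = np.empty((r.size, theta.size))
    for k in range(theta.size):
        dA = dA_fn(theta, k)
        # Direct term: variation of A with beta_LS held fixed.
        J0 = dA @ beta
        # Indirect term: variation of beta_LS, obtained by
        # differentiating the normal equations A^T A beta = A^T b.
        dbeta = np.linalg.solve(AtA, -dA.T @ r - A.T @ (dA @ beta))
        J[:, k] = J0 + A @ dbeta
    return J, r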
“…As in our previous works [13,14,17], we employ a fixed seed value for the random number generators in the numerical experiments in each subsection, so that the reported results here are exactly reproducible. We use the same seed for the random number generators from the Tensorflow library and from the numpy package.…”
Section: Numerical Examples
confidence: 99%
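
A minimal sketch of this seeding scheme, assuming TensorFlow 2.x and an arbitrary seed value (each quoted work fixes its own seed per subsection):

import numpy as np
import tensorflow as tf

SEED = 1                  # assumed value, not the one used in the papers
np.random.seed(SEED)      # seed numpy's global random number generator
tf.random.set_seed(SEED)  # seed TensorFlow's global random number generator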
“…The ELM type idea has also been developed for nonlinear problems; see e.g. [10,11] for solving stationary and time-dependent nonlinear PDEs, in which the neural network is trained by a nonlinear least squares method. Following [11], we broadly refer to the artificial neural network-based methods exploiting these strategies as ELM type methods, including those employing neural networks with multiple hidden layers and those for nonlinear problems [49,51,10,11,17].…”
confidence: 99%
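
To illustrate the nonlinear least squares strategy in the simplest possible setting (a toy sketch, not the formulation of [10,11]): random hidden-layer coefficients are assigned and frozen, and only the output coefficients beta are fitted to the residual of a hypothetical nonlinear equation u'' - u^3 = f; the collocation grid, network width, and omission of boundary conditions are all simplifying assumptions.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
M = 30                                    # hidden-layer width (assumed)
x = np.linspace(0.0, 1.0, 50)             # collocation points (assumed)
W = rng.uniform(-1.0, 1.0, M)             # random, fixed hidden weights
c = rng.uniform(-1.0, 1.0, M)             # random, fixed hidden biases
f = np.zeros_like(x)                      # assumed forcing term

def residual(beta):
    z = np.outer(x, W) + c
    phi = np.tanh(z)                            # hidden-layer outputs
    d2phi = -2.0 * phi * (1.0 - phi**2) * W**2  # second x-derivatives
    u = phi @ beta
    return d2phi @ beta - u**3 - f              # nonlinear PDE residual

sol = least_squares(residual, np.zeros(M))      # nonlinear least squares fit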
“…The pursuit of higher accuracy and more competitive performance with neural networks for computational PDEs has led us in [10,11] to explore randomized neural networks (including ELM) [45,18]. Since optimizing the entire set of weight/bias coefficients in the neural network can be extremely hard and costly, perhaps randomly assigning and fixing a subset of the network's weights will make the resultant optimization task of network training simpler, and ideally linear, without severely sacrificing the achievable approximation capacity.…”
confidence: 99%
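
In the linear case described here, this strategy can be made concrete in a few lines: with the hidden coefficients randomly assigned and frozen, training collapses to a single linear least squares solve for the output coefficients. A minimal function-approximation sketch (the width, ranges, and target function are arbitrary assumptions):

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)[:, None]
y = np.sin(2.0 * np.pi * x[:, 0])       # target function to approximate

M = 40                                  # hidden-layer width (assumed)
W = rng.uniform(-5.0, 5.0, (1, M))      # random, fixed hidden weights
c = rng.uniform(-5.0, 5.0, M)           # random, fixed hidden biases
phi = np.tanh(x @ W + c)                # hidden-layer outputs

beta, *_ = np.linalg.lstsq(phi, y, rcond=None)  # the only "training" step
print(np.max(np.abs(phi @ beta - y)))           # approximation error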