2021
DOI: 10.48550/arXiv.2110.14121
Preprint

On Computing the Hyperparameter of Extreme Learning Machines: Algorithm and Application to Computational PDEs, and Comparison with Classical and High-Order Finite Elements

Suchuan Dong,
Jielin Yang

Abstract: We consider the use of extreme learning machines (ELM) for computational partial differential equations (PDE). In ELM, the hidden-layer coefficients of the neural network are assigned random values generated on [−Rm, Rm] and then fixed, where Rm is a user-provided constant, while the output-layer coefficients are trained by a linear or nonlinear least squares computation. We present a method for computing the optimal or near-optimal value of Rm based on the differential evolution algorithm. This method seeks the op…
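To make the setup described in the abstract concrete, the sketch below applies the same ingredients to a toy problem: a single hidden layer whose weights and biases are drawn uniformly from [−Rm, Rm] and frozen, a linear least squares fit of the output layer on collocation and boundary residuals, and a differential-evolution search over Rm. The 1D Poisson test problem, the residual-based objective, and all parameter values are illustrative assumptions, not the exact algorithm or cost function of the paper.

```python
# Minimal ELM sketch for u''(x) = f(x) on [0,1], u(0) = u(1) = 0, with the
# hidden-layer scale Rm tuned by differential evolution (illustrative only).
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
n_hidden, n_colloc = 100, 101
x = np.linspace(0.0, 1.0, n_colloc)           # collocation points on [0, 1]
f = -np.pi**2 * np.sin(np.pi * x)             # forcing; exact solution u = sin(pi*x)

# One fixed unit-uniform draw; scaling it by Rm keeps the objective deterministic in Rm.
w0 = rng.uniform(-1.0, 1.0, n_hidden)
b0 = rng.uniform(-1.0, 1.0, n_hidden)

def elm_solve(Rm):
    """Freeze hidden weights/biases in [-Rm, Rm]; fit the output layer by linear lstsq."""
    w, b = Rm * w0, Rm * b0
    t = np.tanh(np.outer(x, w) + b)            # hidden features, shape (n_colloc, n_hidden)
    d2t = (w**2) * (-2.0 * t * (1.0 - t**2))   # second x-derivative of the tanh features
    A = np.vstack([d2t, t[[0, -1], :]])        # PDE rows u'' = f plus two Dirichlet rows
    rhs = np.concatenate([f, [0.0, 0.0]])
    beta, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return t @ beta, np.linalg.norm(A @ beta - rhs)

def objective(params):
    # Proxy cost for choosing Rm: log of the least-squares training residual.
    _, res = elm_solve(params[0])
    return np.log10(res + 1e-16)

result = differential_evolution(objective, bounds=[(0.1, 20.0)], seed=1, tol=1e-10)
Rm_opt = result.x[0]
u_num, _ = elm_solve(Rm_opt)
print(f"near-optimal Rm ~ {Rm_opt:.3f}, "
      f"max error vs exact ~ {np.max(np.abs(u_num - np.sin(np.pi * x))):.2e}")
```

Reusing a single fixed unit-uniform draw and scaling it by Rm makes the objective a deterministic function of Rm, which helps the global optimizer; whether the paper uses this particular parameterization or objective is not stated in the excerpt above.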


Cited by 3 publications (58 citation statements) · References 48 publications (128 reference statements)
“…We observe that, for smooth field solutions, the VarPro errors decrease exponentially as the number of collocation points or the number of output-layer coefficients in the neural network increases, which is reminiscent of the spectral convergence of traditional high-order methods [32,60,69,16,12,11,38,66,67]. We also compare extensively the performance of the current VarPro method with that of the ELM method from [13,17]. The numerical results show that, under identical conditions and network configurations, the VarPro method is considerably more accurate than the ELM method, especially when the size of the neural network is small.…”
Section: Introduction
confidence: 85%
“…Remark 2.6. It would be interesting to compare the current VarPro method with the extreme learning machine (ELM) method from [13,17] for solving PDEs. With ELM, the weight/bias coefficients in all the hidden layers of the neural network are pre-set to random values and are fixed, while the output-layer coefficients are computed by the linear least squares method for solving linear PDEs and by the nonlinear least squares method for solving nonlinear PDEs [13].…”
Section: Variable Projection Methods for Solving Linear PDEs
confidence: 99%
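The quote above describes the two branches of ELM: linear least squares for linear PDEs and nonlinear least squares for nonlinear PDEs. The sketch below illustrates only the nonlinear branch, with the hidden-layer weights drawn randomly and frozen and the output-layer coefficients fitted by a nonlinear least squares solve on the collocation residual. The test problem (u'' − u³ = f with homogeneous Dirichlet conditions), the network size, and the Rm value are illustrative assumptions; this is neither the VarPro method of the citing paper nor the exact ELM implementation of [13].

```python
# Illustrative sketch of the nonlinear-PDE branch of ELM: fixed random hidden layer,
# output-layer coefficients beta found by nonlinear least squares on the residual of
# u''(x) - u(x)^3 = f(x), u(0) = u(1) = 0, with exact solution u = sin(pi*x).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n_hidden, n_colloc, Rm = 80, 101, 3.0        # Rm chosen arbitrarily for this toy case
x = np.linspace(0.0, 1.0, n_colloc)
f = -np.pi**2 * np.sin(np.pi * x) - np.sin(np.pi * x)**3

# Fixed random hidden layer (tanh features) and its second x-derivative.
w = rng.uniform(-Rm, Rm, n_hidden)
b = rng.uniform(-Rm, Rm, n_hidden)
t = np.tanh(np.outer(x, w) + b)              # shape (n_colloc, n_hidden)
d2t = (w**2) * (-2.0 * t * (1.0 - t**2))

def residual(beta):
    u = t @ beta
    pde = d2t @ beta - u**3 - f              # interior PDE residual at collocation points
    bc = u[[0, -1]]                          # homogeneous Dirichlet boundary residual
    return np.concatenate([pde, bc])

sol = least_squares(residual, np.zeros(n_hidden), method="lm")  # fit output layer only
u_num = t @ sol.x
print("max error vs exact:", np.max(np.abs(u_num - np.sin(np.pi * x))))
```

Because only the output-layer coefficients enter the residual, the optimization stays low-dimensional even though the PDE is nonlinear, which is the structural point the quoted remark makes about ELM.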