2015
DOI: 10.1007/s40314-015-0246-z

An analysis of numerical issues in neural training by pseudoinversion

Abstract: Some novel strategies have recently been proposed for single hidden layer neural network training that randomly set the weights from the input to the hidden layer, while the weights from the hidden to the output layer are determined analytically by pseudoinversion. These techniques are gaining popularity despite their known numerical issues when singular and/or almost singular matrices are involved. In this paper we discuss a critical use of Singular Value Analysis to identify these drawbacks, and we propose an origi…
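
For orientation, the following is a minimal sketch of the training scheme the abstract describes, not the paper's implementation: random input-to-hidden weights, a singular-value check on the hidden-layer matrix (the kind of analysis the paper advocates), and output weights by pseudoinversion. All names, sizes, the toy data, and the tanh activation are illustrative assumptions.

# Single-hidden-layer training by pseudoinversion (ELM-style sketch).
# Input->hidden weights are random and never trained; hidden->output
# weights come from a least-squares solve via the Moore-Penrose
# pseudoinverse. All names and sizes here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: N samples, d inputs, 1 output.
N, d, H = 200, 3, 50
X = rng.standard_normal((N, d))
y = np.sin(X.sum(axis=1, keepdims=True))

# Step 1: random input->hidden weights and biases.
W_in = rng.standard_normal((d, H))
b = rng.standard_normal(H)
Hmat = np.tanh(X @ W_in + b)      # hidden-layer output matrix

# Step 2: inspect the singular values before inverting, to detect the
# (near-)singularity issues the paper analyzes.
s = np.linalg.svd(Hmat, compute_uv=False)
print("condition number of H:", s[0] / s[-1])

# Step 3: hidden->output weights by pseudoinversion.
W_out = np.linalg.pinv(Hmat) @ y   # equivalently np.linalg.lstsq

y_hat = Hmat @ W_out
print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))

A large condition number at Step 2 signals that the plain pseudoinverse solve is ill-conditioned, which is precisely the situation that motivates the regularized variants discussed in the citing papers below.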

Cited by 4 publications (6 citation statements)
References 19 publications

“…Introducing N sample training data examples (i.e., input-output pairs), the weights and biases can be computed in a supervised learning framework using either well established iterative back propagation methods [49] or pseudoinverse approaches [34]. As mentioned previously, the ANN architecture is trained by utilizing an ELM approach proposed in [33] for extremely fast training of an SLFN.…”
Section: A. Extreme Learning Machine
confidence: 99%
“…where λ controls the trade-off between the least-squares error and the penalty term for regularization (e.g., see [34]). In the present study we set λ = 10⁻¹².…”
Section: A. Extreme Learning Machine
confidence: 99%
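
As a sketch of the regularized solve these citing papers describe, assuming the standard Tikhonov/ridge form of the least-squares problem; the function name and shapes are illustrative and not taken from any of the cited works:

# Ridge-regularized output-weight solve:
#   minimize ||H W - Y||^2 + lam * ||W||^2,
# solved via the normal equations (H^T H + lam*I) W = H^T Y,
# with lam = 1e-12 as in the quoted studies.
import numpy as np

def regularized_output_weights(Hmat, Y, lam=1e-12):
    # Identity is sized by the hidden dimension (columns of H).
    A = Hmat.T @ Hmat + lam * np.eye(Hmat.shape[1])
    return np.linalg.solve(A, Hmat.T @ Y)

Driving lam toward zero recovers the plain pseudoinverse solution, which is where the near-singularity issues analyzed in the paper reappear.
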
“…Regularisation methods have to be used [51,52] to turn the original problem into a well-posed one, i.e. roughly speaking into a problem insensitive to small changes in initial conditions.…”
Section: Neural Model and Pseudo-inversion Based Training
confidence: 99%
“…We choose τ = 10⁻¹² for the present study, which indicates the trade-off between the least-squares error and the penalty term for regularization [84]. The unknown weights can be calculated by using Equation (31).…”
Section: Artificial Neural Network Based Non-intrusive Reduced-order …
confidence: 99%