2012
DOI: 10.1007/978-3-642-34500-5_15

Matrix Pseudoinversion for Image Neural Processing

Abstract: Recently, some novel strategies have been proposed for training Single Hidden Layer Feedforward Networks that set the input-to-hidden weights randomly, while the hidden-to-output weights are determined analytically via the Moore-Penrose generalised inverse. Such non-iterative strategies are appealing since they allow fast learning, but some care is required to achieve good results, mainly concerning the procedure used for matrix pseudoinversion. This paper proposes a novel approach…
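The training scheme the abstract describes can be sketched in a few lines of NumPy. This is a hedged illustration under our own assumptions (network sizes, tanh activation, and all function names are ours, not the paper's): input-to-hidden weights are drawn at random, and only the hidden-to-output weights are computed, via the Moore-Penrose pseudoinverse of the hidden-layer activation matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_slfn(X, T, n_hidden):
    """Non-iterative SLFN training: random input weights, pinv output weights.

    X: (n_samples, n_in) inputs; T: (n_samples, n_out) targets.
    """
    n_in = X.shape[1]
    W = rng.standard_normal((n_in, n_hidden))   # random input-to-hidden weights
    b = rng.standard_normal(n_hidden)           # random hidden biases
    H = np.tanh(X @ W + b)                      # hidden-layer activation matrix
    beta = np.linalg.pinv(H) @ T                # output weights via H^+ (Moore-Penrose)
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: fit y = x1 + x2 on random data
X = rng.standard_normal((200, 2))
T = (X[:, 0] + X[:, 1]).reshape(-1, 1)
W, b, beta = train_slfn(X, T, n_hidden=50)
err = np.mean((predict(X, W, b, beta) - T) ** 2)
```

Because no iterative optimisation is involved, the whole fit reduces to one matrix pseudoinversion; as the abstract notes, how that pseudoinversion is carried out is exactly where care is needed.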



Cited by 3 publications (6 citation statements)
References 10 publications
“…When approaching critical size, inversion of singular values causes a wrong evaluation of H + and therefore a significant growth in the error; when the critical dimension is reached, singular values under threshold are automatically removed, thus allowing the subsequent decrease of error. The same trend was detected analysing the astronomical dataset in (Cancelliere et al, 2012).…”
Section: Experiments and Results (supporting, confidence: 80%)
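The instability the quoted passage describes can be reproduced directly: when H is near-singular, inverting its tiny singular values makes the entries of H+ blow up, while truncating singular values below a threshold (the behaviour of the `rcond` cutoff in `np.linalg.pinv`) keeps the result stable. The matrix sizes and tolerance below are illustrative choices of ours, not values from the cited work.

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a nearly rank-deficient matrix: last column almost equals the first.
A = rng.standard_normal((100, 20))
A[:, -1] = A[:, 0] + 1e-12 * rng.standard_normal(100)

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Naive pseudoinverse: inverting the ~1e-12 singular value produces huge entries.
naive_pinv = Vt.T @ np.diag(1.0 / s) @ U.T

# Truncated pseudoinverse: zero out reciprocals of singular values below a
# relative threshold, analogous to np.linalg.pinv(A, rcond=1e-8).
tol = 1e-8 * s.max()
s_inv = np.where(s > tol, 1.0 / s, 0.0)
trunc_pinv = Vt.T @ np.diag(s_inv) @ U.T

max_naive = np.abs(naive_pinv).max()   # huge: dominated by 1/s_min
max_trunc = np.abs(trunc_pinv).max()   # moderate: small singular values dropped
```

This mirrors the trend in the quote: near the critical size the inverted small singular values corrupt H+, and removing under-threshold singular values lets the error decrease again.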
“…Some numerical instability issues have already been evidenced in our previous investigations (Cancelliere et al, 2012); we provided suggestions on possible mitigation techniques like selection of a convenient activation function and normalisation of the input weights. Hereafter we show that adding regularisation to the implementation prescriptions already analysed provides a convenient and effective approach to deal with such problem.…”
Section: Experiments and Results (mentioning, confidence: 88%)
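The regularised variant this passage mentions can be sketched as a ridge (Tikhonov) solve: instead of beta = H+ T, compute beta = (HᵀH + λI)⁻¹ HᵀT, which damps the contribution of near-zero singular values rather than inverting them. The function name and the value of λ below are our own illustrative choices, not prescriptions from the cited work.

```python
import numpy as np

rng = np.random.default_rng(2)

def ridge_output_weights(H, T, lam=1e-3):
    """Regularised output weights: solve (H^T H + lam*I) beta = H^T T."""
    n_hidden = H.shape[1]
    A = H.T @ H + lam * np.eye(n_hidden)   # lam > 0 keeps A well-conditioned
    return np.linalg.solve(A, H.T @ T)

# Toy usage on a random hidden-layer activation matrix and targets.
H = rng.standard_normal((200, 50))
T = rng.standard_normal((200, 1))
beta_reg = ridge_output_weights(H, T)
```

Compared with plain pseudoinversion, the λI term bounds the effective inverse singular values by 1/(2√λ), so the numerical blow-up near the critical hidden-layer size never occurs; the cost is a small bias controlled by λ.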