2016
DOI: 10.5391/ijfis.2016.16.2.125
Pseudoinverse Matrix Decomposition Based Incremental Extreme Learning Machine with Growth of Hidden Nodes

Abstract: This study proposes a fast version of the conventional extreme learning machine (ELM), called the pseudoinverse matrix decomposition based incremental ELM (PDI-ELM). One of the main problems in ELM is determining the number of hidden nodes. In this study, the number of hidden nodes is determined automatically: the proposed model is an incremental version of ELM that adds neurons with the goal of minimizing the error of the ELM network. To speed up the model, the pseudoinverse information from previ…
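As a rough illustration of the incremental scheme the abstract describes (grow the hidden layer node by node and re-solve the output weights with the Moore-Penrose pseudoinverse), here is a hedged Python sketch. It recomputes the full pseudoinverse at every step, which is exactly the slow baseline that PDI-ELM's decomposition-based reuse is designed to avoid; all names, the activation function, and the stopping rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def incremental_elm(X, T, max_nodes=50, tol=1e-3, rng=None):
    """Illustrative incremental ELM: add random hidden nodes one at a
    time, re-solving the output weights by pseudoinverse each step.
    (Hypothetical sketch; PDI-ELM instead updates the pseudoinverse
    incrementally rather than recomputing it.)"""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    W = np.empty((0, d))           # input weights, one row per hidden node
    b = np.empty(0)                # hidden-node biases
    beta = None                    # output weights
    for _ in range(max_nodes):
        # add one randomly generated hidden node
        W = np.vstack([W, rng.standard_normal((1, d))])
        b = np.append(b, rng.standard_normal())
        H = np.tanh(X @ W.T + b)   # hidden-layer output matrix
        beta = np.linalg.pinv(H) @ T   # least-squares output weights
        err = np.linalg.norm(H @ beta - T) / np.sqrt(n)
        if err < tol:              # stop once training error is small enough
            break
    return W, b, beta
```

The stopping criterion above is a generic RMS-error threshold; the paper's actual growth/termination rule may differ.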

Cited by 13 publications (5 citation statements)
References 13 publications
“…Over the last few years, machine learning methods have successfully solved the problem of finding hidden patterns or structures in data [13, 14]. Several methods for ALL detection have been proposed in the literature.…”
Section: Introduction (mentioning)
confidence: 99%
“…In the basic RVFLN algorithm, the output weights can be calculated in closed form with the help of the Moore-Penrose pseudoinverse; this batch solution is good for offline models but not suitable for online models [33]. Because the online setting must adapt to varying working conditions, an online sequential algorithm is preferred to improve accuracy. The proposed DC ring microgrid model produces a huge amount of data, so OS-RVFLN is more suitable [34] for this demonstration.…”
Section: OS-RVFLN Algorithm with Forgetting Factor (mentioning)
confidence: 99%
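The contrast this statement draws between a one-shot pseudoinverse solution and online-sequential training can be illustrated with the standard recursive least-squares update used by OS-ELM/OS-RVFLN-style methods: new data chunks are folded into a running pseudoinverse state without revisiting old samples. The sketch below is an illustrative assumption, not the cited paper's code; names are hypothetical.

```python
import numpy as np

def os_update(P, beta, H, T):
    """Fold one new data chunk (H, T) into the running state.
    P approximates (H_all^T H_all)^{-1}; beta holds the current
    output weights. This is the textbook recursive least-squares
    step behind online-sequential ELM/RVFLN variants."""
    K = np.linalg.inv(np.eye(H.shape[0]) + H @ P @ H.T)
    P = P - P @ H.T @ K @ H @ P
    beta = beta + P @ H.T @ (T - H @ beta)
    return P, beta

# Initialization from a first batch H0, T0 (H0 must have full
# column rank): P = inv(H0.T @ H0), beta = P @ H0.T @ T0.
```

With that exact initialization, the sequential result matches the batch pseudoinverse solution on all data seen so far, which is why the method suits large streaming datasets.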
“…Various efforts have been made to explore the relationship between the approximation ability and the number of nodes of specific neural networks, such as single-hidden-layer feedforward neural networks (SLFNs) and two-hidden-layer feedforward neural networks with specific or conditional activation functions [11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28]. For example, it was proved in [12] that N arbitrary distinct samples can be learned precisely by standard SLFNs with N hidden neurons (including biases) and the signum activation function.…”
Section: Introduction (mentioning)
confidence: 99%
“…Later, Huang [17] proved that if the number of hidden nodes equals the number of distinct training samples, SLFNs with random input weight vectors and hidden biases can approximate the training samples with zero error. Furthermore, it was proved in [18][19][20] that for SLFNs the approximation error decreases monotonically as nodes are gradually added to the hidden layer.…”
Section: Introduction (mentioning)
confidence: 99%
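The monotonicity result cited above follows from nested column spaces: each added hidden node can only enlarge the space the least-squares fit projects onto, so the training residual never grows. A quick numerical check (an illustrative sketch, not the cited papers' proof; all data and names are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 3))
T = np.sin(X.sum(axis=1, keepdims=True))   # toy regression target
W = rng.standard_normal((25, 3))           # pool of random input weights
b = rng.standard_normal(25)                # pool of random biases

errors = []
for L in range(1, 26):
    H = np.tanh(X @ W[:L].T + b[:L])       # first L hidden nodes
    beta = np.linalg.lstsq(H, T, rcond=None)[0]
    errors.append(np.linalg.norm(H @ beta - T))

# residual is non-increasing because the column spaces are nested
assert all(e2 <= e1 + 1e-9 for e1, e2 in zip(errors, errors[1:]))
```

This only shows the training error shrinks; the cited works establish the stronger approximation-theoretic statements.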