2015
DOI: 10.3233/ida-150743
Effective algorithms of the Moore-Penrose inverse matrices for extreme learning machine

Abstract: Extreme learning machine (ELM) is a learning algorithm for single-hidden layer feedforward neural networks (SLFNs) which randomly chooses hidden nodes and analytically determines the output weights of SLFNs. Once the input weights and the hidden layer biases are chosen randomly, ELM can simply be treated as a linear system. However, the learning time of ELM is mainly spent on calculating the Moore-Penrose inverse of the hidden layer output matrix. This paper focuses on effective computation of the Mo…
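The abstract's central point is that, once the random hidden layer is fixed, training an ELM reduces to solving a linear system whose least-squares solution uses the Moore-Penrose inverse of the hidden layer output matrix H. A minimal sketch of that idea, assuming a sigmoid activation and NumPy's SVD-based pinv (not the paper's own accelerated algorithms):

```python
import numpy as np

def train_elm(X, T, n_hidden, rng=np.random.default_rng(0)):
    """Minimal ELM sketch: random hidden layer, output weights via pseudoinverse."""
    n_features = X.shape[1]
    # Input weights W and hidden biases b are chosen randomly and never trained.
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    # Hidden layer output matrix H (sigmoid activation assumed here).
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Output weights beta solve H @ beta ~= T in the least-squares sense,
    # i.e. beta = H^+ @ T, with H^+ the Moore-Penrose inverse of H.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```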

Cited by 97 publications (30 citation statements). References 25 publications.
“…Let us consider that A is an m × n matrix whose WSVD is provided by (4). Furthermore, assume that the starting value is given by (11). Thus, the matrix sequence from (15) tends to A†_MN.…”
Section: Error Analysis
confidence: 99%
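The citing statement refers to a matrix iteration converging to the weighted Moore-Penrose inverse A†_MN; the specific scheme (that paper's equations (11) and (15)) is not reproduced here. As a rough illustration of the general approach, a sketch of the classical Newton-Schulz iteration for the ordinary Moore-Penrose inverse, with the standard starting value X_0 = alpha * A^T:

```python
import numpy as np

def moore_penrose_newton_schulz(A, n_iter=50):
    """Newton-Schulz iteration X_{k+1} = X_k (2I - A X_k) converging to A^+.

    The starting value X_0 = alpha * A^T with 0 < alpha < 2 / sigma_max(A)^2
    guarantees convergence to the Moore-Penrose inverse of A."""
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / sigma_max(A)^2
    X = alpha * A.T
    I = np.eye(A.shape[0])
    for _ in range(n_iter):
        X = X @ (2 * I - A @ X)
    return X

# Quick check against NumPy's SVD-based pseudoinverse.
A = np.random.default_rng(0).standard_normal((8, 5))
print(np.allclose(moore_penrose_newton_schulz(A), np.linalg.pinv(A), atol=1e-8))
```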
“…For example, how to apply them in the real world? It includes the semantic explanation of approximate cognitive concepts, the assignment of the parameters k and l, the evaluation of the learnt granular concepts, and how to improve the learning efficiency [17]. Moreover, cognitive logic [26,50] should be incorporated into cognitive concept learning, and uncertainty [29] needs to be considered in incomplete information [29].…”
Section: Final Remarks
confidence: 99%
“…Tian et al [18, 19] used the Bagging Integrated Model and the modified AdaBoost RT to modify the conventional ELM, respectively. Lu et al [20] proposed several algorithms to reduce the computational cost of the Moore-Penrose inverse matrices for ELM. Zhang et al [21] introduced an incremental ELM which combines the deep feature extracting ability of Deep Learning Networks with the feature mapping ability of the ELM.…”
Section: Related Work
confidence: 99%