2009
DOI: 10.1109/tnn.2009.2024147

Error Minimized Extreme Learning Machine With Growth of Hidden Nodes and Incremental Learning

Abstract: One of the open problems in neural network research is how to automatically determine network architectures for given applications. In this brief, we propose a simple and efficient approach to automatically determine the number of hidden nodes in generalized single-hidden-layer feedforward networks (SLFNs), which need not be neural alike. This approach, referred to as error-minimized extreme learning machine (EM-ELM), can add random hidden nodes to SLFNs one by one or group by group (with varying group size). Dur…
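
The growth procedure the abstract describes can be illustrated with a minimal NumPy sketch. This is not the authors' code: the sigmoid activation, the fixed group size, and the full pseudoinverse recompute at each step are assumptions made for brevity (EM-ELM's actual contribution is a cheaper incremental update of the pseudoinverse, sketched under Lemma 1 below).

```python
# Hedged sketch of EM-ELM-style hidden-node growth (illustrative, not the authors' code).
# Assumes sigmoid hidden nodes and recomputes the pseudoinverse at each step;
# the paper replaces that recompute with a fast incremental update.
import numpy as np

def em_elm_sketch(X, T, max_nodes=100, group_size=5, eps=1e-3, seed=None):
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    W = np.empty((0, n_features))    # input weights, one row per hidden node
    b = np.empty(0)                  # hidden-node biases
    H = np.empty((n_samples, 0))     # hidden-layer output matrix
    beta = None
    while W.shape[0] < max_nodes:
        # Add a group of random hidden nodes (group_size may vary per step).
        dW = rng.standard_normal((group_size, n_features))
        db = rng.standard_normal(group_size)
        dH = 1.0 / (1.0 + np.exp(-(X @ dW.T + db)))   # sigmoid activations
        W = np.vstack([W, dW])
        b = np.concatenate([b, db])
        H = np.hstack([H, dH])
        beta = np.linalg.pinv(H) @ T          # least-squares output weights
        err = np.linalg.norm(H @ beta - T)    # training residual
        if err < eps:                         # stop once the target error is met
            break
    return W, b, beta
```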

Cited by 578 publications (53 citation statements)
References 22 publications
“…The proposed adaptive OS-ELM based on improved ABC algorithm is compared with OS-ELM [26], EI-ELM [19], EM-ELM [11], and standard ELM [17]. The parameters of the proposed adaptive OS-ELM are as follows: the data embedding dimension m is determined as 48, the number of neurons in the hidden … It can be observed from Tab.…”
Section: Simulationmentioning
confidence: 99%
“…OS-ELM first calculates its initial network weights in the initial training stage, and then the corresponding network weights can be obtained on the basis of the initial network weights when a new training sample is added to the training sample set. However, OS-ELM and other improved ELM algorithms (I-ELM [18], EI-ELM [19], EM-ELM [11], etc.) assume that new and old training samples are equally important, giving the same weight to old and new samples and failing to highlight the role of new training samples. Moreover, as long as new training samples are obtained, OS-ELM updates the network weights recursively.…”
Section: Introductionmentioning
confidence: 99%
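
The recursive update mentioned in this excerpt is the standard recursive least-squares step of OS-ELM. A minimal sketch follows; the function names are hypothetical, and it deliberately weights old and new samples equally, which is exactly the behavior the quoted passage criticizes.

```python
# Hedged sketch of an OS-ELM-style recursive update (illustrative, not [26]'s code).
import numpy as np

def oselm_init(H0, T0):
    # Initial training stage: needs enough rows in H0 for H0^T H0 to be invertible.
    P = np.linalg.inv(H0.T @ H0)
    beta = P @ H0.T @ T0
    return P, beta

def oselm_update(P, beta, H1, T1):
    # Sequential stage: fold in a new chunk (H1, T1) without revisiting old data.
    K = np.linalg.inv(np.eye(H1.shape[0]) + H1 @ P @ H1.T)
    P = P - P @ H1.T @ K @ H1 @ P
    beta = beta + P @ H1.T @ (T1 - H1 @ beta)
    return P, beta
```

Variants that emphasize recent samples typically discount P (a forgetting factor) before each update; the equal weighting shown here is what the quoted work sets out to improve.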
“…A new variant of I-ELM, called error-minimized ELM (EM-ELM), is proposed in [12]. This network can add nodes one by one or group by group (chunking).…”
Section: Incremental Based Extreme Learning Machinesmentioning
confidence: 99%
“…Lemma 1 [12]. Given an SLFN, let H_1 be the initial hidden-layer output matrix with L_0 hidden nodes.…”
Section: Incremental Based Extreme Learning Machinesmentioning
confidence: 99%
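
The lemma underpins EM-ELM's fast weight update. Writing H_k for the hidden-layer output matrix at step k and δH_k for the columns contributed by the newly added nodes, the incremental pseudoinverse update used in [12] can be sketched as follows (notation assumed to match the paper):

$$H_{k+1} = [\, H_k \;\; \delta H_k \,], \qquad D_k = \big( (I - H_k H_k^{\dagger})\, \delta H_k \big)^{\dagger}, \qquad U_k = H_k^{\dagger} (I - \delta H_k D_k),$$
$$H_{k+1}^{\dagger} = \begin{bmatrix} U_k \\ D_k \end{bmatrix}, \qquad \beta_{k+1} = H_{k+1}^{\dagger} T,$$

so the new output weights are obtained from the previous pseudoinverse rather than by recomputing a full pseudoinverse at every growth step.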
“…Theoretical proofs and a more thorough presentation of the ELM algorithm are detailed in the original paper in which Huang et al. present the algorithm and its justifications [2]. Furthermore, the hidden nodes need not be ‘neuron-alike’ [10][11][12].…”
Section: Extreme Learning Machine (Elm)mentioning
confidence: 99%
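
For reference, the batch ELM solution these excerpts build on assigns the hidden parameters (w_i, b_i) at random and solves only for the output weights; in the usual notation (assumed from [2]):

$$H_{ji} = g(w_i \cdot x_j + b_i), \qquad \beta = H^{\dagger} T = \arg\min_{\beta} \| H \beta - T \|,$$

where g is the activation (or, per [10][11][12], any sufficiently general hidden-node mapping), x_j is the j-th training input, and T is the target matrix.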