2010
DOI: 10.1016/j.neucom.2010.02.019
Optimization method based extreme learning machine for classification


Cited by 800 publications
(380 citation statements)
References 16 publications
“…According to [25,26,32], the non-trained random weights, in combination with the MSE optimization criterion, reduce the chance of overfitting the training data in RCNs and ELMs and hence let the system generalize better to noisy data than a system with fully trained parameters. In what follows, we distinguish two experimental settings: one in which the system is trained on clean images only (clean training) and one in which the system is trained on a mix of clean samples and samples corrupted by the five noise types that are also present in the test set (multi-conditional training).…”
Section: Recognition of Noisy Images (mentioning)
Confidence: 99%
“…Compared with traditional popular gradient-based learning algorithms for SLFNs, ELM not only learns much faster with higher generalization ability but also avoids many difficulties associated with stopping criteria, learning rates, learning epochs, and local minima. The ELM model has been used in many areas, such as time series prediction [44,45], image quality assessment [46], classification [47,48], face recognition [49,50] and others. The structure of ELM is illustrated in Fig.…”
Section: Principle of Extreme Learning Machine (mentioning)
Confidence: 99%
“…ELM for classification is easily implemented and less sensitive to specified parameters [32]. In ELM, the hidden nodes of the single hidden layer feedforward networks (SLFNs) can be randomly generated and the output weights of SLFNs are analytically determined [33]. Given N distinct samples (x_k, t_k), where…”
Section: Classifier (mentioning)
Confidence: 99%
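The excerpt above describes the core ELM procedure: hidden-node parameters are generated at random and only the output weights are solved analytically. A minimal sketch of that idea, assuming a single-hidden-layer network with a sigmoid activation and a pseudoinverse solve (variable names and the toy data are illustrative, not from the paper):

```python
import numpy as np

def elm_train(X, T, n_hidden, rng=np.random.default_rng(0)):
    """Randomly generate hidden-node parameters, then compute the
    output weights analytically via the Moore-Penrose pseudoinverse."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy usage: two classes with one-hot targets; predicted class is the argmax.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
W, b, beta = elm_train(X, T, n_hidden=20)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
```

Because the hidden layer is never trained, the only iterative machinery of backpropagation (learning rates, epochs, stopping criteria) disappears, which is the speed advantage the second excerpt refers to.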