2012
DOI: 10.1016/j.neunet.2011.08.005

Comparing error minimized extreme learning machines and support vector sequential feed-forward neural networks

Abstract: Recently, error minimized extreme learning machines (EM-ELMs) have been proposed as a simple and efficient approach to build single-hidden-layer feedforward networks (SLFNs) sequentially. They add random hidden nodes one by one (or group by group) and update the output weights incrementally to minimize the sum-of-squares error in the training set. Other very similar methods that also construct SLFNs sequentially had been reported earlier with the main difference that their hidden-layer weights are a subset of …
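The growth loop described in the abstract is straightforward to sketch. Below is a minimal, illustrative Python version, assuming a sigmoid activation and refitting the output weights with a full least-squares solve at each step (the original EM-ELM instead updates the pseudoinverse incrementally, which is what makes it fast); all function and variable names are our own, not from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grow_slfn(X, T, max_nodes, target_sse, seed=None):
    """Grow an SLFN by adding random hidden nodes one at a time and
    refitting the output weights until the training SSE target is met.
    Simplified: a full least-squares solve per step stands in for
    EM-ELM's incremental output-weight update."""
    rng = np.random.default_rng(seed)
    n_samples, n_inputs = X.shape
    W = np.empty((0, n_inputs))   # hidden-layer input weights
    b = np.empty(0)               # hidden-layer biases
    beta, sse = None, np.inf
    while W.shape[0] < max_nodes and sse > target_sse:
        # Add one randomly generated hidden node.
        W = np.vstack([W, rng.uniform(-1, 1, size=(1, n_inputs))])
        b = np.append(b, rng.uniform(-1, 1))
        H = sigmoid(X @ W.T + b)            # hidden-layer output matrix
        beta, *_ = np.linalg.lstsq(H, T, rcond=None)
        sse = np.sum((H @ beta - T) ** 2)   # sum-of-squares training error
    return W, b, beta

# Usage: fit a toy regression problem.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
T = np.sin(X[:, :1]) + 0.1 * rng.standard_normal((200, 1))
W, b, beta = grow_slfn(X, T, max_nodes=50, target_sse=1.0, seed=1)
print(W.shape[0], "hidden nodes")
```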

Cited by 19 publications (21 citation statements)
References 19 publications (27 reference statements)
“…Algorithm 2 is mainly motivated by the work in [23], where EM-ELMs [16] is compared with SV-SFNNs [17]. EM-ELMs is a constructive version of ELM which results in a faster method with similar generalization performance.…”
Section: Extreme Learning Machines With the Input Strategy
confidence: 99%
“…The main conclusion in [23] is that SV-SFNNs outperform EM-ELMs, indicating that the strategy of selecting the hidden-layer weights as a subset of the input data may be better than the random selection made by EM-ELMs. The experiments in this work are devoted to seeing whether that conclusion also holds for the original ELMs.…”
Section: Extreme Learning Machines With the Input Strategy
confidence: 99%
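To make the contrast in the statement above concrete, here is a hedged sketch of the two hidden-weight selection strategies it refers to: EM-ELM draws hidden-layer weights at random, while SV-SFNN takes them from the training inputs themselves. This is illustrative only; the actual SV-SFNN greedily picks, at each step, the candidate input vector that most reduces the residual error, whereas the sketch below just samples a random subset. All names are ours.

```python
import numpy as np

def random_hidden_nodes(n_nodes, n_inputs, rng):
    # EM-ELM-style: input weights and biases drawn at random.
    W = rng.uniform(-1, 1, size=(n_nodes, n_inputs))
    b = rng.uniform(-1, 1, size=n_nodes)
    return W, b

def input_subset_hidden_nodes(X, n_nodes, rng):
    # SV-SFNN-style: input weights taken as a subset of the training
    # data (a random subset here; the real method selects, at each
    # step, the candidate sample that most reduces the training error).
    idx = rng.choice(X.shape[0], size=n_nodes, replace=False)
    W = X[idx].copy()
    b = rng.uniform(-1, 1, size=n_nodes)
    return W, b

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
W_rand, b_rand = random_hidden_nodes(10, X.shape[1], rng)
W_sub, b_sub = input_subset_hidden_nodes(X, 10, rng)
```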
“…It should also be noticed that the approximation error is not guaranteed to be close to zero for every randomly chosen set of hidden nodes. Recently, Romero [20] showed that support vector sequential feedforward neural networks have better generalization performance than error minimized extreme learning machines, which build single-hidden-layer feedforward networks sequentially. Fortunately, the relatively fast convergence rate and small approximation error can be guaranteed if the number of hidden nodes is large enough, which is meaningful for large-scale data sets.…”
Section: Introduction
confidence: 99%