2014
DOI: 10.1007/s13042-014-0292-7
Two-stage extreme learning machine for high-dimensional data

Abstract: Extreme learning machine (ELM) has been proposed for fast supervised learning by applying random computational nodes in the hidden layer. Similar to the support vector machine, ELM cannot handle high-dimensional data effectively: its generalization performance tends to degrade on such data. To exploit high-dimensional data effectively, a two-stage extreme learning machine model is established. In the first stage, we incorporate ELM into the spectral regressio…
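The abstract's core idea, random hidden nodes with output weights solved in closed form, can be illustrated with a minimal sketch. This is a generic ELM, not the paper's two-stage method; the function names, hidden-layer size, and tanh activation are illustrative assumptions.

```python
import numpy as np

# Minimal ELM sketch (illustrative assumption; not the paper's exact method).
# Input weights and biases are random and never trained; only the output
# weights are computed, via the Moore-Penrose pseudoinverse.
rng = np.random.default_rng(0)

def elm_train(X, Y, n_hidden=60):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ Y                  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    # Apply the fixed random projection, then the learned output weights.
    return np.tanh(X @ W + b) @ beta
```

Because only `beta` is solved for, training reduces to one pseudoinverse, which is the source of ELM's speed; the paper's contribution is to precede this step with a spectral-regression dimensionality reduction so the random projection operates on fewer, more informative features.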

Cited by 24 publications (8 citation statements)
References 19 publications
“…Kumar and Minz (2014) pointed out a number of related works which have focused on comparing several feature selection methods for different domain problems. Others have proposed novel feature selection algorithms featuring filter (Lim, Lee, & Kim, 2017; Wang, Wei, Yang, & Wang, 2017), wrapper (Das, Das, & Ghosh, 2017; Zhang et al., 2016), and embedded (Liu, Huang, Meng, Gong, & Zhang, 2016; Zhu, Zhu, Hu, Zhang, & Zuo, 2017) techniques. However, many of these novel algorithms have been developed based on only one type of selection technique: filter, wrapper, or embedded feature selection.…”
Section: Introduction
confidence: 99%
“…ELM is now modified for big data. Liu and Huang [98] used ELM to reduce the dimension of high-dimensional data by spectral regression. Then, the output weight can be obtained.…”
Section: ELM for Big Data
confidence: 99%
“…The proposed algorithm is simple and effective for binary classification. Liu and Huang [98] … [198] Elastic ELM was proposed for big data learning.…”
Section: ELM vs. Support Vector Machine (SVM)
confidence: 99%
“…Since the classification ability is quantified by the generalization error, we will attempt to develop a convergence bound on the generalization error of HSIC-FMKL based on the established theory of Rademacher complexities. Besides, extending the proposed model to multiple kernel clustering [69], extreme learning machine [70], and domain transfer learning [71], as well as adding more complicated kernels such as the Chebyshev kernel and Hermite kernel [72] to the pool of base kernels for MKL, are also important issues to be investigated.…”
Section: Conclusion and Further Study
confidence: 99%