2017
DOI: 10.1111/coin.12136

Big data regression with parallel enhanced and convex incremental extreme learning machines

Abstract: This work considers scalable incremental extreme learning machine (I-ELM) algorithms, which could be suitable for big data regression. During the training of I-ELMs, the hidden neurons are presented one by one, and the weights are based solely on simple direct summations, which can be most efficiently mapped onto parallel environments. Existing incremental versions of ELMs are the I-ELM, enhanced incremental ELM (EI-ELM), and convex incremental ELM (CI-ELM). We study the enhanced and convex incremental ELM (ECI-…
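To make the abstract's point concrete, here is a minimal sketch of the basic I-ELM regression update, assuming a tanh activation and uniform random hidden parameters; names such as ielm_fit are illustrative, not from the paper. Each new neuron's output weight is the ratio of two dot products against the current residual, which are exactly the "simple direct summations" mentioned above.

```python
import numpy as np

def ielm_fit(X, t, n_neurons=100, seed=0):
    """Basic I-ELM regression sketch: add random hidden neurons one by one."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    e = np.asarray(t, dtype=float).copy()     # current residual error
    params = []                               # (a, b, beta) per neuron
    for _ in range(n_neurons):
        a = rng.uniform(-1.0, 1.0, size=d)    # random input weights
        b = rng.uniform(-1.0, 1.0)            # random bias
        h = np.tanh(X @ a + b)                # neuron output on all samples
        beta = (e @ h) / (h @ h)              # output weight from two dot products
        e -= beta * h                         # shrink the residual
        params.append((a, b, beta))
    return params

def ielm_predict(params, X):
    y = np.zeros(X.shape[0])
    for a, b, beta in params:
        y += beta * np.tanh(X @ a + b)
    return y
```

Because each step touches the data only through the sums e·h and h·h, no matrix inversion or matrix-matrix product is needed, which is what makes the scheme attractive for big data settings.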

Cited by 1 publication (2 citation statements) · References 28 publications (68 reference statements)
"…In addition to ensemble and matrix operations, there are other ways to implement distributed ELM: for example, iteration acceleration [82], an acceleration package for MATLAB [83], GPU acceleration [84], [85], and online sequential learning used to realize distribution [86]. Different from He [55], Xin [42]…"
Section: Others
Confidence: 99%
"…Different from He [55] and Xin [42], Kokkinos et al. [82] developed an incremental version of ELM. Incremental ELM does not use direct matrix-matrix multiplications; instead, neurons are added one by one, and each neuron is fit with a single pass of simple summations over the data, which parallelizes directly.…"
Section: Others
Confidence: 99%
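To illustrate why this parallelizes directly, here is a hypothetical sketch of computing one neuron's output weight from per-shard partial sums when the data are split across workers; the sharding scheme and the name beta_distributed are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def beta_distributed(shards, a, b):
    """One I-ELM output weight from per-shard partial sums.

    `shards` is an iterable of (X_chunk, e_chunk) pairs, e.g. one per
    worker or node.  Each shard contributes two scalars (its partial
    e.h and h.h), so only a tiny reduction crosses the network for
    every neuron added.
    """
    num = den = 0.0
    for X_chunk, e_chunk in shards:      # each iteration can run on its own node
        h = np.tanh(X_chunk @ a + b)     # neuron output on the local shard
        num += e_chunk @ h               # partial sum of e . h
        den += h @ h                     # partial sum of h . h
    return num / den
```

Since the per-shard work is an embarrassingly parallel map followed by a two-scalar reduction, the incremental scheme avoids the communication-heavy matrix products that batch ELM training would require.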