2016
DOI: 10.1016/j.neunet.2015.09.003
A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics

Abstract: Big dimensional data is a growing trend emerging in many real-world contexts, extending from web mining, gene expression analysis, and protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that increasing dimensionality has an impeding effect on classifier performance, which is termed the "peaking phenomenon" in the field of machine intelligence. To address this issue, dimensionality reduction is commonly employed as a preprocessing step on the …

Cited by 35 publications (17 citation statements) · References 64 publications
“…Although Deng et al (2016) proposed this method for a 3-layer network, it was implemented in this work regardless the number of hidden layers.…”
Section: Svdmentioning
confidence: 99%
“…Based on Deng et al (2016), this scheme is an alternative version of the former SVD. Now, training data is split into min{Q_b, P_t} chunks (or subsets) of equal size P_ti = max{floor(P_t / Q_b), 1}, where "floor" rounds its argument down to the previous integer (whenever it is decimal) or yields the argument itself, with each chunk used to derive Q_bi = 1 hidden node.…”
Section: Mini-batch Svdmentioning
confidence: 99%
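The chunking rule quoted above can be sketched in Python. This is a minimal illustration under stated assumptions: the function names and the "leading singular vector per chunk" derivation step are illustrative, not the exact procedure from Deng et al (2016).

```python
import numpy as np

def split_into_chunks(X, Q_b):
    """Split training data into min(Q_b, P_t) chunks of equal size
    P_ti = max(floor(P_t / Q_b), 1), one chunk per hidden node.

    X   : (P_t, d) training matrix with P_t samples
    Q_b : desired number of hidden nodes
    """
    P_t = X.shape[0]
    n_chunks = min(Q_b, P_t)
    P_ti = max(P_t // Q_b, 1)  # floor division, at least 1 sample
    return [X[i * P_ti:(i + 1) * P_ti] for i in range(n_chunks)]

def hidden_node_from_chunk(chunk):
    """Hypothetical derivation step: use the chunk's leading right
    singular vector as the weight vector of one hidden node."""
    _, _, Vt = np.linalg.svd(chunk, full_matrices=False)
    return Vt[0]
```

For example, 10 samples with Q_b = 4 yield min(4, 10) = 4 chunks of max(floor(10/4), 1) = 2 samples each, and each chunk produces one hidden-node weight vector.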
“…Just like the traditional ELM [21][22][23], ELM-AE contains three layers: input layer, hidden layer, and output layer. The difference is that the target output is the same as the input in ELM-AE.…”
Section: Elm-aementioning
confidence: 99%
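The ELM-AE structure described above (three layers, with the target output set equal to the input) admits a short closed-form sketch. This is a minimal illustration, not the cited papers' exact implementation: the orthogonalized random weights, tanh activation, and ridge parameter C are common ELM-AE conventions assumed here, and it assumes the hidden layer is no wider than the input.

```python
import numpy as np

def elm_autoencoder(X, n_hidden, C=1e3, seed=0):
    """Minimal ELM-AE sketch: random (orthogonalized) hidden layer,
    target output equal to the input, output weights solved in
    closed form by regularized least squares (no backpropagation).
    Assumes n_hidden <= X.shape[1]."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # random input weights, orthogonalized via QR, plus random biases
    W, _ = np.linalg.qr(rng.standard_normal((d, n_hidden)))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer activations
    # ridge-regularized least squares with target = X itself
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ X)
    return W, b, beta
```

Reconstruction is then `np.tanh(X @ W + b) @ beta`, which approximates X; this closed-form solve is what distinguishes ELM-AE from backpropagation-trained autoencoders.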