2016
DOI: 10.1016/j.neucom.2015.11.009
Two-hidden-layer extreme learning machine for regression and classification

Cited by 96 publications (40 citation statements)
References 28 publications

“…Now the TELM defines the matrix W_HE = [B1 W1], so the parameters of the second hidden layer can be easily obtained by using formula (12) and the inverse function of the activation function.…”
Section: Two-hidden-layer ELM (mentioning)
confidence: 99%
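A minimal NumPy sketch of that step, assuming a logistic activation g and a samples-as-rows layout; the helper names (inverse_sigmoid, second_hidden_layer_params, W_aug) are illustrative, not taken from the cited paper:

```python
import numpy as np

def inverse_sigmoid(y, eps=1e-7):
    """Inverse of the logistic activation g(x) = 1 / (1 + exp(-x))."""
    y = np.clip(y, eps, 1.0 - eps)        # keep the logit finite
    return np.log(y / (1.0 - y))

def second_hidden_layer_params(H, H1_target):
    """Solve for the second hidden layer's bias and weights in one step.

    H         -- first-hidden-layer output, shape (N, L)
    H1_target -- expected second-hidden-layer output, shape (N, L)
    Returns W_aug of shape (L + 1, L): row 0 plays the role of the bias
    B1, the remaining rows of W1, so that g(H_E @ W_aug) ~= H1_target.
    """
    H_E = np.hstack([np.ones((H.shape[0], 1)), H])   # augmented matrix [1 H]
    # Map the target through g^{-1}, then least-squares solve via the
    # Moore-Penrose pseudoinverse (the "formula (12)" step in the quote).
    return np.linalg.pinv(H_E) @ inverse_sigmoid(H1_target)
```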
“…But ELM only needs to randomly set the weights and biases of the hidden neurons, and the output weights are determined using the Moore-Penrose pseudoinverse under the least-squares criterion. In recent years, various ELM variants have been proposed to achieve better performance, such as the deep ELM with kernel based on the Multilayer Extreme Learning Machine (DELM) algorithm [11]; the two-hidden-layer extreme learning machine (TELM) [12]; a four-layered feedforward neural network [13]; the online sequential extreme learning machine [14,15]; the multiple kernel extreme learning machine (MK-ELM) [16]; the two-stage extreme learning machine [17]; and methods that use noise detection to improve classifier accuracy [18,19].…”
Section: Introduction (mentioning)
confidence: 99%
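To make that contrast concrete, a basic single-hidden-layer ELM fits in a few lines of NumPy; the hidden width, seed, and sigmoid activation below are arbitrary illustrative choices rather than settings from any cited paper:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elm_fit(X, T, n_hidden=64):
    """Basic ELM: random hidden parameters, analytic output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights, never tuned
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = sigmoid(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return sigmoid(X @ W + b) @ beta
```

Because only beta is solved for, training reduces to a single pseudoinverse computation, which is the speed advantage the quoted passage contrasts with gradient-based training.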
“…To remedy this shortcoming of RELM, the literature [17,18] starts from improving its network structure. On the basis of the traditional three-layer RELM structure, the number of hidden layers is increased to form a neural network with one input layer, multiple hidden layers, and one output layer, that is, the multiple-hidden-layer RELM network model (MRELM), in which the neuron nodes of each hidden layer are fully connected.…”
Section: Introduction (mentioning)
confidence: 99%
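For reference, the ridge-regularized output-weight solve that separates RELM from plain ELM is commonly written as beta = (H'H + I/C)^(-1) H'T; the sketch below assumes that standard formulation (an MRELM-style network would repeat a layer-wise solve of this kind, with details varying between the cited papers):

```python
import numpy as np

def relm_output_weights(H, T, C=1e3):
    """Ridge-regularized ELM output weights: beta = (H'H + I/C)^(-1) H'T.

    H -- hidden-layer output, shape (N, L)
    T -- training targets,    shape (N, m)
    C -- regularization trade-off; the I/C term shrinks the solution
         and keeps the normal matrix well conditioned.
    """
    L = H.shape[1]
    return np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ T)
```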
“…This approach maintains testing accuracy under stable conditions, but it is limited to ELMs with linear hidden neurons. In recent years, many studies have focused on structural improvements to ELM, such as ELM with two hidden layers [14] and ELM with multiple hidden layers [15,16]. The spectral data of haematite form a set of highly coupled, nonlinear matrices.…”
Section: Introduction (mentioning)
confidence: 99%