2019
DOI: 10.1007/s00521-019-04303-9

Extreme learning machine with autoencoding receptive fields for image classification

Cited by 8 publications (16 citation statements) | References 21 publications
“…Then, all the related parameters of these hidden layers can be obtained after the (M-2)-iteration operation. To make the final actual hidden-layer output close to the expected hidden-layer output, the MELM training stage introduces a novel parameter-setting step for each newly added hidden layer, as described in (8); this step is key to guaranteeing the stability and feasibility of the MELM model.…”
Section: Multilayered ELM (MELM)
confidence: 99%
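To make the quoted parameter-setting step concrete, here is a minimal sketch, assuming a tanh activation whose inverse is arctanh; the function name, the shapes, and the exact form of eq. (8) are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def add_hidden_layer(H_prev, H_expected):
    """Hedged sketch of the MELM parameter-setting step for a newly
    added hidden layer (what the quote calls eq. (8)).  The tanh/arctanh
    pair and all names are illustrative assumptions."""
    n = H_prev.shape[0]
    # Augment the previous layer's actual output with a bias column.
    H_E = np.hstack([np.ones((n, 1)), H_prev])
    # Solve W so that g(H_E @ W) approximates the expected output:
    # W = pinv(H_E) @ g^{-1}(H_expected), clipped to arctanh's domain.
    W = np.linalg.pinv(H_E) @ np.arctanh(np.clip(H_expected, -0.999, 0.999))
    # Actual output of the new layer, ideally close to H_expected.
    return np.tanh(H_E @ W), W
```

In this reading, each added layer's weights are computed in closed form from the previous layer's actual output and the expected output, which matches the quote's point that the step keeps the actual hidden output close to the expected one.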
“…In the ELM algorithm, the input weights and hidden biases are randomly generated from any continuous probability distribution, and the output weights are then solved using the generalized Moore-Penrose inverse. Compared with the BP neural network, this algorithm performs well in regression [4][5][6], classification [7][8][9], feature learning [10][11][12], and clustering tasks [13][14][15]. Unlike conventional gradient-based neural-network learning algorithms, which are sensitive to the parameter combination and prone to getting trapped in local optima, ELM not only trains faster but also achieves a smaller training error.…”
Section: Introduction
confidence: 99%
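As a concrete illustration of the quoted training procedure, here is a minimal ELM sketch in NumPy; the tanh activation, layer sizes, and names are assumptions, but the two steps (random hidden parameters, closed-form output weights via the Moore-Penrose pseudoinverse) follow the quote.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=100):
    """Minimal ELM sketch: hidden parameters drawn at random, output
    weights solved in closed form with the pseudoinverse."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output
    beta = np.linalg.pinv(H) @ T                     # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy classification run: 200 samples, 10 features, 3 one-hot classes.
X = rng.standard_normal((200, 10))
T = np.eye(3)[rng.integers(0, 3, 200)]
W, b, beta = elm_train(X, T)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
```

Because no gradients are involved, training reduces to a single linear solve, which is why the quote contrasts ELM's speed and training error with gradient-based BP networks.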
“…The multilayer ELM-LRF is another known ELM-LRF variant, consisting of multiple convolution and pooling layers [67], [27], [38], [51], [114], [77].…”
Section: Random Filter
confidence: 99%
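The quoted multilayer variant stacks convolution and pooling stages; below is a hedged sketch of one such stage. The SVD orthogonalisation of random filters and the square-root pooling follow common ELM-LRF descriptions, and all sizes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_lrf_stage(images, n_filters=8, k=3, pool=2):
    """One convolution-plus-pooling stage of ELM-LRF; stacking several
    such stages gives the multilayer variant the quote describes."""
    # Random filters, orthogonalised via SVD as in ELM-LRF.
    F = rng.standard_normal((n_filters, k * k))
    U, _, Vt = np.linalg.svd(F, full_matrices=False)
    F = (U @ Vt).reshape(n_filters, k, k)
    n, H, W = images.shape
    oh, ow = H - k + 1, W - k + 1
    maps = np.empty((n, n_filters, oh, ow))
    for i in range(oh):                      # valid convolution
        for j in range(ow):
            maps[:, :, i, j] = np.tensordot(
                images[:, i:i + k, j:j + k], F, axes=([1, 2], [1, 2]))
    # Square-root pooling over non-overlapping pool x pool blocks.
    ph, pw = oh // pool, ow // pool
    maps = maps[:, :, :ph * pool, :pw * pool]
    return np.sqrt((maps.reshape(n, n_filters, ph, pool, pw, pool) ** 2)
                   .sum(axis=(3, 5)))

features = elm_lrf_stage(rng.standard_normal((4, 28, 28)))  # toy batch
```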
“…Autoencoding ELM-LRF, proposed by [27], [28], and [93], builds high-level feature representations by combining ELM-AE with ELM-LRF. Another notable difference is its use of three ELM-AEs in parallel, one per colour channel, for coding features.…”
Section: Random Filter
confidence: 99%
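To illustrate the per-channel autoencoding the quote mentions, here is a hedged sketch of an ELM autoencoder learning receptive-field filters, run three times in parallel, once per colour channel; every name and size is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_ae_filters(patches, n_hidden=16):
    """Hedged ELM-AE sketch: a random projection followed by closed-form
    output weights that reconstruct the input; those weights are then
    reused as convolution filters (one filter per row of beta)."""
    W = rng.standard_normal((patches.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(patches @ W + b)            # random feature mapping
    beta = np.linalg.pinv(H) @ patches      # reconstruct the patches
    return beta

# Three ELM-AEs in parallel, one per colour channel, as the quote notes.
patches = rng.standard_normal((1000, 5, 5, 3))  # toy 5x5 RGB patches
filters_per_channel = [
    elm_ae_filters(patches[:, :, :, c].reshape(1000, -1)) for c in range(3)
]
```

The design choice here is that the filters are learned from the data (via reconstruction) rather than kept purely random, which is what distinguishes the autoencoding variant from the plain random-filter ELM-LRF above.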