2017 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2017.8296491
Multi-layer multi-objective extreme learning machine

Cited by 8 publications (8 citation statements) · References 12 publications
“…where $W_{in} \in \mathbb{R}^{N_R \times N_U}$ is the input weight matrix, $\hat{W}^{(l)} \in \mathbb{R}^{N_R \times N_R}$ is the recurrent weight matrix for level $l$, $W^{(l)} \in \mathbb{R}^{N_R \times N_R}$ is the matrix of connection weights from level $l-1$ to level $l$, $a^{(l)}$ is the leaky parameter at level $l$, and $\tanh$ denotes the element-wise application of the hyperbolic tangent [55][56][57][58].…”
Section: Experiments and Results
confidence: 99%
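The quote above only names the weight matrices, so the state transition it refers to is presumably the standard deep leaky-integrator echo state network recurrence, in which level 1 is driven by the external input and each deeper level by the state of the level below. Below is a minimal NumPy sketch of that recurrence under this assumption; the function name, argument layout, and zero initial states are illustrative choices, not the cited paper's implementation.

```python
import numpy as np

def deep_esn_states(u_seq, W_in, W_hat, W, a, n_levels):
    """Sketch of a deep leaky ESN forward pass (assumed standard form).

    u_seq : (T, N_U) input sequence
    W_in  : (N_R, N_U) input weight matrix (drives level 1 only)
    W_hat : list of (N_R, N_R) recurrent weight matrices, one per level
    W     : list of (N_R, N_R) inter-level matrices (level l-1 -> l); W[0] unused
    a     : list of leaky parameters a^(l) in (0, 1]
    """
    T = u_seq.shape[0]
    N_R = W_hat[0].shape[0]
    x = [np.zeros(N_R) for _ in range(n_levels)]   # zero initial states (assumption)
    states = np.zeros((T, n_levels, N_R))
    for t in range(T):
        for l in range(n_levels):
            # Level 1 is driven by the external input; deeper levels by the
            # state of the level below, already updated at time t.
            drive = W_in @ u_seq[t] if l == 0 else W[l] @ x[l - 1]
            x_new = np.tanh(drive + W_hat[l] @ x[l])
            # Leaky-integrator update with leak rate a^(l)
            x[l] = (1 - a[l]) * x[l] + a[l] * x_new
        states[t] = np.stack(x)
    return states
```

The returned tensor collects every level's state at every time step, which is the usual input to a linear readout trained on top of the reservoir.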
“…The random weights generate approximately rectangular and weakly correlated features at the hidden layer, which yields an accurate solution and high generalization ability. More specifically, the output of the proposed S-hFFNN with random hidden neurons in the hidden layer can be represented as follows [26][27][28]:…”
Section: The Proposed System
confidence: 99%
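For context, the "random hidden neurons" the quote refers to follow the usual ELM construction: a random, untrained hidden layer $H = g(XW + b)$ whose output weights are solved in closed form. A minimal sketch, assuming a sigmoid activation and ridge-regularized least squares for the output weights (both common ELM choices, not necessarily the exact ones in the cited S-hFFNN paper):

```python
import numpy as np

def elm_random_features(X, n_hidden, rng=None):
    """Random (untrained) hidden layer of an ELM-style network."""
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid activation

def elm_fit_output(H, T_targets, reg=1e-3):
    """Output weights by regularized least squares:
    beta = (H^T H + reg*I)^(-1) H^T T, so predictions are H @ beta."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + reg * np.eye(n), H.T @ T_targets)
```

Only the output weights are learned; the random hidden mapping is fixed, which is what makes training a single linear solve rather than iterative backpropagation.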
“…For example, in the study by Esfe et al. (2015) on predicting thermal conductivity, a network with two hidden layers and five neurons in each layer had the lowest error and the highest fitting coefficient for 30 datasets. Moreover, in the study by Lekamalage, Song, Huang, Cui, and Liang (2017), which experimented with image classification of 24,300 samples, a network with two hidden layers, the first consisting of 200 nodes and the second of 3,000 nodes, was found to have better testing accuracy. Besides, in a study by Zhang (2017) on recognizing and predicting mental disease based on 10,000 datasets, the best mean square error was recorded when the number of hidden nodes increased from eight to 16.…”
Section: Literature Review
confidence: 99%