2018
DOI: 10.30931/jetas.450252

Hessenberg ELM Autoencoder Kernel for Deep Learning

Abstract: Deep Learning (DL) is an effective approach that builds on the computational capability and the advantage of the hidden layers in network models. It has a pre-training phase, which defines the output parameters in an unsupervised way, and a supervised training phase for optimization of the pre-defined classification parameters. This study aims to perform highly generalized, fast training for DL algorithms with the simplicity advantage of Extreme Learning Machines (ELM). The applications of the proposed classifier model were experimented …
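
The abstract describes a two-stage scheme: unsupervised pre-training that fixes the hidden representation, followed by a supervised stage that sets the classification parameters, with ELM supplying fast least-squares training. A minimal NumPy sketch of that general idea is given below; the layer sizes, sigmoid activation, and plain pseudo-inverse solution are illustrative assumptions, not the Hessenberg kernel proposed in the paper.

import numpy as np

rng = np.random.default_rng(0)

def elm_autoencoder_layer(X, n_hidden):
    """Unsupervised pre-training step: learn a compact representation of X.
    Random input weights, sigmoid activation, least-squares output weights
    (illustrative ELM-AE-style layer, not the paper's Hessenberg kernel)."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))      # hidden activations
    beta = np.linalg.pinv(H) @ X                # output weights reconstruct X from H
    return X @ beta.T                           # encoded representation of X

def elm_classifier(X, T, n_hidden):
    """Supervised stage: single-hidden-layer ELM readout on the encoded data."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    beta = np.linalg.pinv(H) @ T                # output weights by pseudo-inverse
    return W, b, beta

# Toy usage: two stacked encoding layers, then a supervised readout (shapes assumed).
X = rng.standard_normal((200, 32))              # 200 samples, 32 features
T = np.eye(2)[rng.integers(0, 2, 200)]          # one-hot targets
Z = elm_autoencoder_layer(X, 24)
Z = elm_autoencoder_layer(Z, 16)
W, b, beta = elm_classifier(Z, T, 64)
H_new = 1.0 / (1.0 + np.exp(-(Z @ W + b)))
pred = np.argmax(H_new @ beta, axis=1)          # class predictions on the encoded data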

Cited by 14 publications (10 citation statements) · References 18 publications

“…(2) Thus, a learning method for the network, called ELM, can be described through pseudo-inverse methods (Kutlu et al., 2015; Altan et al., 2018; Kutlu et al., 2018).…”
Section: T H
mentioning confidence: 99%
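
Equation (2) of the citing paper is not reproduced in this excerpt. For reference, the standard ELM learning step that such pseudo-inverse methods refer to solves the following linear system for the output weights (the normal-equations form assumes H has full column rank):

\[
H\beta = T, \qquad \hat{\beta} = H^{\dagger} T = \left(H^{\top} H\right)^{-1} H^{\top} T,
\]

where H is the hidden-layer output matrix, T is the target matrix, and H^{\dagger} is the Moore-Penrose pseudo-inverse.
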
“…The fully connected layers work like the multilayer neural network in traditional machine learning. Instead of network learning, various machine learning algorithms, such as k-Nearest Neighbors and Support Vector Machines, can be applied in this layer (Altan et al., 2018; Altan et al., 2019; Camgözlü & Kutlu, 2020).…”
Section: Convolutional Neural Network
mentioning confidence: 99%
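
The statement above describes replacing a CNN's fully connected readout with a conventional classifier trained on the extracted features. A small scikit-learn sketch of that pattern follows; the random `features` and `labels` arrays are placeholders for activations exported from the convolutional part of whatever network is used.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Placeholders for flattened CNN activations and class labels; in practice these
# would be exported from the convolutional layers of the trained network.
rng = np.random.default_rng(0)
features = rng.standard_normal((300, 128))
labels = rng.integers(0, 2, 300)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels,
                                          test_size=0.3, random_state=42)

for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(clf.score(X_te, y_te), 3))
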
“…The most distinctive features of DL are the feature-learning approach and shared-weight models. Although the shared weights are used at the learning stages with the emergence of feature learning, the DL model has many parameters that need to be optimized, owing to the size of the hidden layers of the model and the number of neurons at each layer [6,13,18].…”
Section: Deep Extreme Learning Machines
mentioning confidence: 99%
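
As a rough illustration of why the parameter count grows with the size and number of hidden layers, the count for a stack of fully connected layers is simply the sum of the weight-matrix and bias sizes; the layer widths below are arbitrary examples, not values from the cited works.

# Parameters of a fully connected stack: sum over layers of (fan_in * fan_out + fan_out).
def dense_param_count(layer_sizes):
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))

print(dense_param_count([784, 512, 256, 10]))   # 535818 trainable parameters
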
“…The encoded final representation of the input data was fed into the supervised ELM classifier [15,17,25]. The ELM autoencoder (ELM-AE) is one of the most effective and simplistic encoding kernels [18,29]. Hessenberg decomposition is an inverse solution that expresses a matrix in terms of a unitary matrix and an upper Hessenberg matrix, which is tridiagonal for symmetric matrices [30].…”
Section: Deep Extreme Learning Machines
mentioning confidence: 99%
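
The quoted description refers to Hessenberg decomposition, A = Q H Qᵀ with Q unitary (orthogonal in the real case) and H upper Hessenberg, which reduces to a tridiagonal form when A is symmetric. The SciPy check below only verifies that factorization; how the decomposition is embedded in the autoencoder kernel itself is not shown in this excerpt.

import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

# A = Q @ H @ Q.T with Q orthogonal and H upper Hessenberg.
H, Q = hessenberg(A, calc_q=True)
print(np.allclose(A, Q @ H @ Q.T))                       # True

# For a symmetric matrix the Hessenberg form is (numerically) tridiagonal.
S = A + A.T
H_s, Q_s = hessenberg(S, calc_q=True)
tridiag_band = np.triu(np.tril(H_s, 1), -1)              # keep only the tri-diagonal band
print(np.allclose(H_s, tridiag_band))                    # True
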
“…Ahmet extracted the features of lung sounds through the empirical wavelet transform and then input them into several models to distinguish COPD patients from healthy subjects [22]. Altan et al. used the HHT to extract the features of lung sounds and fed the feature set into the proposed Deep ELM with HessELM-AE to distinguish COPD patients from healthy subjects, achieving an accuracy rate of 92.22% [23].…”
Section: Introduction
mentioning confidence: 99%
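
The last statement summarizes a pipeline in which Hilbert-Huang transform (HHT) features of lung sounds are fed to the Deep ELM classifier. The fragment below sketches only the Hilbert-spectral half of that pipeline with SciPy: the sinusoidal `imfs`, the sampling rate, and the summary statistics chosen as features are illustrative stand-ins; in the real pipeline the intrinsic mode functions would come from an empirical mode decomposition of a lung-sound segment, which is not implemented here.

import numpy as np
from scipy.signal import hilbert

fs = 4000                                # sampling rate in Hz (assumed for illustration)
t = np.arange(0, 1.0, 1.0 / fs)

# Stand-ins for intrinsic mode functions; a real HHT would obtain these via EMD.
imfs = [np.sin(2 * np.pi * 150 * t), 0.5 * np.sin(2 * np.pi * 40 * t)]

def hilbert_features(imfs, fs):
    """Instantaneous-amplitude / instantaneous-frequency statistics per IMF."""
    feats = []
    for imf in imfs:
        analytic = hilbert(imf)                              # analytic signal
        amp = np.abs(analytic)                               # instantaneous amplitude
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) * fs / (2.0 * np.pi)      # instantaneous frequency (Hz)
        feats.extend([amp.mean(), amp.std(), inst_freq.mean(), inst_freq.std()])
    return np.array(feats)

print(hilbert_features(imfs, fs))        # feature vector to feed a downstream classifier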