Proceedings of the 22nd International Conference on Enterprise Information Systems 2020
DOI: 10.5220/0009397605400547
An Effective Sparse Autoencoders based Deep Learning Framework for fMRI Scans Classification

Cited by 5 publications (5 citation statements); references 0 publications.
“…Autoencoders are applied for feature reduction and feature learning [8][9][10][11] . Many studies 8,10 combine autoencoders with supervised and unsupervised learning to realize better feature extraction and classification performance.…”
Section: Models Based On Autoencodermentioning
confidence: 99%
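As an illustration of the combination described above — unsupervised feature reduction with an autoencoder whose codes then feed a supervised stage — here is a minimal tied-weight autoencoder in NumPy on toy data. All sizes, the learning rate, and the data are illustrative assumptions, not details from the cited studies; the hidden activations `H` are the reduced features a classifier would consume.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 200 samples, 20 features (a stand-in for voxel features).
X = rng.normal(size=(200, 20))

# One-hidden-layer autoencoder, 20 -> 8 -> 20, with tied weights for brevity.
W = rng.normal(scale=0.1, size=(20, 8))
b_enc = np.zeros(8)
b_dec = np.zeros(20)
lr = 0.05

def forward(X):
    H = sigmoid(X @ W + b_enc)   # encoder: compressed features
    R = H @ W.T + b_dec          # linear decoder: reconstruction
    return H, R

losses = []
for _ in range(300):
    H, R = forward(X)
    err = R - X                              # reconstruction error
    losses.append((err ** 2).mean())
    dH = err @ W * H * (1 - H)               # backprop through the encoder
    gW = X.T @ dH + err.T @ H                # W appears in encoder and decoder
    W -= lr * gW / len(X)
    b_enc -= lr * dH.mean(axis=0)
    b_dec -= lr * err.mean(axis=0)

H, _ = forward(X)  # reduced representation for the supervised stage
```

Training only needs to show the unsupervised stage working: the reconstruction loss falls, and `H` has the reduced dimensionality a downstream classifier would be trained on.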
“…Heinsfeld et al. 9 adopted stacked denoising autoencoders to extract features, implemented across the model's unsupervised and supervised stages. Mahmoud et al. 10 also used an unsupervised algorithm, applying two sparse autoencoders to extract features, while an additional autoencoder was employed as a supervised classifier. Both sets of experiments perform data preprocessing in an unsupervised stage.…”
Section: Models Based On Autoencodermentioning
confidence: 99%
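The two-stage arrangement described above — a second autoencoder trained on the codes of the first, whose output feeds a classifier — can be sketched as follows. This is a minimal tied-weight NumPy sketch with arbitrary layer sizes and toy data, not the cited authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, steps=300, lr=0.05):
    """Train a tied-weight autoencoder on X; return encoder parameters."""
    n_in = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_in, n_hidden))
    b_enc = np.zeros(n_hidden)
    b_dec = np.zeros(n_in)
    for _ in range(steps):
        H = sigmoid(X @ W + b_enc)           # codes
        R = H @ W.T + b_dec                  # reconstruction
        err = R - X
        dH = err @ W * H * (1 - H)
        W -= lr * (X.T @ dH + err.T @ H) / len(X)
        b_enc -= lr * dH.mean(axis=0)
        b_dec -= lr * err.mean(axis=0)
    return W, b_enc

X = rng.normal(size=(150, 32))               # toy input features

# Stage 1: first autoencoder learns codes from the raw input.
W1, b1 = train_autoencoder(X, 16)
H1 = sigmoid(X @ W1 + b1)

# Stage 2: second autoencoder is trained on the first stage's codes.
W2, b2 = train_autoencoder(H1, 8)
H2 = sigmoid(H1 @ W2 + b2)  # final features for the supervised classifier
```

Each stage is trained greedily on the previous stage's output, which is the usual way such stacks are pretrained before a supervised layer is attached.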
“…Abeer et al., in [2], proposed a hybrid unsupervised and supervised framework based on sparse autoencoders for controlling and summarizing features, and obtained better results than the literature on the same data domain. Xi.…”
Section: State Of the Artmentioning
confidence: 99%
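The framework quoted above is built on sparse autoencoders. One common way to impose sparsity — an assumption here, since the quote does not specify the exact penalty used in [2] — is a KL-divergence term that pulls each hidden unit's mean activation toward a small target ρ:

```python
import numpy as np

def kl_sparsity_penalty(H, rho=0.05, beta=3.0):
    """KL-divergence sparsity penalty commonly used by sparse autoencoders.

    H:    hidden activations, shape (n_samples, n_hidden), values in (0, 1)
    rho:  target mean activation per hidden unit
    beta: penalty weight added to the reconstruction loss
    """
    rho_hat = H.mean(axis=0)  # observed mean activation of each unit
    kl = (rho * np.log(rho / rho_hat)
          + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return beta * kl.sum()

# A unit whose mean activation equals rho contributes zero penalty;
# densely firing units are penalized.
H_sparse = np.full((10, 4), 0.05)
H_dense = np.full((10, 4), 0.5)
print(kl_sparsity_penalty(H_sparse))        # 0.0
print(kl_sparsity_penalty(H_dense) > 0.0)   # True
```

Minimizing this term alongside the reconstruction loss drives most hidden units toward near-zero activation, which is what "controlling and summarizing features" with a sparse autoencoder amounts to.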
“…The encoder function f(Ie) produces an abstracted version of the input Ie in a hidden layer he, while the decoder function Oe(f(Ie)) reconstructs the original input from the layer he by minimizing the loss function L(Ie, Oe(f(Ie))). The contractive autoencoder [31] adds an explicit regularizer β(he) on the hidden layer he and trains the encoder to minimize this regularizer in (2), forcing the model's representation to be insensitive to slight variations of the input values. Here β(he) is the squared Frobenius norm [30] of the Jacobian matrix of partial derivatives of he with respect to Ie, and α is a free parameter:

β(he) = α ‖∂he/∂Ie‖²_F (2)…”
Section: Contractive Autoencodermentioning
confidence: 99%
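The contractive penalty β(he) in (2) can be checked numerically. For a sigmoid encoder h = σ(W·x + b) the Jacobian has the closed form J_ij = h_i(1 − h_i)·W_ij, so the squared Frobenius norm factorizes per hidden unit. The sketch below (layer sizes and values are illustrative) verifies that closed form against a finite-difference Jacobian:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contractive_penalty(x, W, b, alpha=0.1):
    """alpha * squared Frobenius norm of the encoder Jacobian dh/dx.

    For h = sigmoid(W @ x + b), J_ij = h_i * (1 - h_i) * W_ij, hence
    ||J||_F^2 = sum_i (h_i (1 - h_i))^2 * sum_j W_ij^2.
    """
    h = sigmoid(W @ x + b)
    return alpha * np.sum((h * (1 - h)) ** 2 * np.sum(W ** 2, axis=1))

# Verify the closed form with a central finite-difference Jacobian.
n_in, n_hid = 5, 3
W = rng.normal(size=(n_hid, n_in))
b = rng.normal(size=n_hid)
x = rng.normal(size=n_in)

eps = 1e-6
J = np.empty((n_hid, n_in))
for j in range(n_in):
    dx = np.zeros(n_in)
    dx[j] = eps
    J[:, j] = (sigmoid(W @ (x + dx) + b) - sigmoid(W @ (x - dx) + b)) / (2 * eps)

assert np.isclose(contractive_penalty(x, W, b, alpha=1.0), np.sum(J ** 2), atol=1e-6)
```

Because the penalty shrinks the Jacobian's norm, small perturbations of the input Ie produce only small changes in he, which is the contraction the regularizer enforces.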