High-dimensional microarray dataset classification using an improved adam optimizer (iAdam)
2020
DOI: 10.1007/s12652-020-01832-3

Cited by 45 publications (22 citation statements)
References 46 publications
“…Our deep learning network parameters are initialized with Xavier initialization [54]. The Adam optimizer [55], with a learning rate of 3e-4 and a mini-batch size of 24, is used to minimize the network loss function from Eq (1). The learning rate is gradually decayed to 25e-6 during training.…”
Section: Experiments and Results
confidence: 99%
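The training setup in this statement is concrete enough to sketch. Below is a minimal PyTorch illustration (the framework is an assumption) of Xavier initialization, Adam at a learning rate of 3e-4 with mini-batches of 24, and a gradual decay toward 25e-6. The placeholder architecture, the linear decay schedule, the epoch count, and the synthetic data are all illustrative assumptions, not details taken from the citing paper.

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    """Placeholder network; the citing paper's architecture is not reproduced here."""
    def __init__(self, in_features: int, n_classes: int):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(in_features, 128), nn.ReLU(), nn.Linear(128, n_classes)
        )

    def forward(self, x):
        return self.fc(x)

def xavier_init(module):
    # Xavier initialization [54] for every linear layer
    if isinstance(module, nn.Linear):
        nn.init.xavier_uniform_(module.weight)
        nn.init.zeros_(module.bias)

model = Net(in_features=2000, n_classes=2)
model.apply(xavier_init)

# Adam [55] at 3e-4, annealed toward 25e-6; the quote does not state the
# decay schedule, so a linear schedule is assumed here.
epochs = 100  # assumed
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
scheduler = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1.0, end_factor=25e-6 / 3e-4, total_iters=epochs
)

# Synthetic stand-in data, batched with the quoted mini-batch size of 24.
X = torch.randn(240, 2000)
y = torch.randint(0, 2, (240,))
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(X, y), batch_size=24, shuffle=True
)

loss_fn = nn.CrossEntropyLoss()
for _ in range(epochs):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss_fn(model(xb), yb).backward()
        optimizer.step()
    scheduler.step()  # step the learning-rate decay once per epoch
```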
“…The effective input vector is then passed to the fully connected layer, which functions similarly to the MLP. In the final section of the deep convolutional layers, the Softmax [17] classification layers perform the classification using ADAM (adaptive moment optimizer) [18]; the loss function is shown in the following equation:…”
Section: Deep Convolution
confidence: 99%
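Since the quoted equation is truncated, the following is only a generic PyTorch sketch of the arrangement the statement describes: convolutional layers feeding a fully connected (MLP-like) head, with a softmax layer performing the classification and Adam as the optimizer. All channel counts and kernel sizes are illustrative assumptions, and standard cross-entropy stands in for the unquoted loss function.

```python
import torch
import torch.nn as nn

class DeepConvClassifier(nn.Module):
    """Illustrative sketch only: conv feature extractor -> fully connected
    (MLP-like) head -> softmax classification, as the statement describes."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8, 64), nn.ReLU(),
            nn.Linear(64, n_classes),  # raw logits
        )

    def forward(self, x):
        return self.head(self.features(x))

model = DeepConvClassifier(n_classes=2)
optimizer = torch.optim.Adam(model.parameters())  # ADAM [18]
# CrossEntropyLoss applies log-softmax internally, so the model emits logits
# and the explicit softmax is used only to read out class probabilities.
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(4, 1, 200)          # batch of 4 samples, 200 features each
probs = torch.softmax(model(x), 1)  # softmax classification output
```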
“…The OSCC data used in this research is accumulated from the National Center for Biotechnology Information (NCBI) [52]. The remaining datasets are collected from http://csse.szu.edu.cn/staff/ahuzx/Datasets.html. All six datasets used in this research have two labels in their target variable.…”
Section: Dataset Description
confidence: 99%
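As a small illustration of the data layout described here (high-dimensional expression features with a binary target), the sketch below loads one such table. The file name and column name are hypothetical stand-ins for a locally saved copy of one of the six datasets; the actual files come from NCBI and the URL above.

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

# "oscc_microarray.csv" and the "label" column are hypothetical names,
# not the actual file layout distributed by the sources above.
df = pd.read_csv("oscc_microarray.csv")
X = df.drop(columns=["label"]).to_numpy()       # one column per gene/probe
y = LabelEncoder().fit_transform(df["label"])   # binary target -> {0, 1}
assert len(set(y)) == 2  # each dataset has two labels in its target variable
print(X.shape, y.shape)
```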