2022
DOI: 10.1016/j.health.2021.100010

Meta-Health Stack: A new approach for breast cancer prediction

Cited by 24 publications (12 citation statements)
References 26 publications
“…RF and gradient boosting combine the results of decision trees (DTs) for better prediction. Gradient boosting is an ensemble tree-based method that applies the principle of gradient descent [47]. RF divides the data into random subsets, trains the trees in parallel, and ultimately uses a majority vote for the final prediction.…”
Section: Methods (mentioning)
confidence: 99%
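The contrast drawn in this statement, parallel bootstrap training with majority voting (RF) versus sequential, gradient-descent-driven boosting, can be illustrated with a minimal scikit-learn sketch on the WDBC data. The hyperparameters and the train/test split below are illustrative assumptions, not settings from the cited paper.

```python
# Illustrative sketch (not the paper's exact setup): random forest vs. gradient boosting
# on the Wisconsin Diagnostic Breast Cancer (WDBC) data shipped with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# RF: trees are grown independently on random subsets and combined by majority vote.
rf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=42)

# Gradient boosting: trees are added sequentially, each one fitting the gradient of the loss.
gb = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, random_state=42)

for name, model in [("Random Forest", rf), ("Gradient Boosting", gb)]:
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```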
“…Just like gene expression datasets, different feature-reduction strategies have been utilized to enhance breast cancer classification accuracy on the WDBC dataset. Multiple feature selection methods have been used to build the Meta-Health Stack by Samieinasab et al. [47]. In this model, the Extra Trees classifier is used to combine features resulting from the Variance Inflation Factor, Pearson’s Correlation, and Information Gain.…”
Section: Related Work (mentioning)
confidence: 99%
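A minimal sketch of this kind of ensemble feature-selection stage follows, under stated assumptions: the VIF threshold, the top-k cut-offs, and the union rule used to merge the three selections are illustrative choices, not the configuration reported by Samieinasab et al.

```python
# Hedged sketch of an ensemble feature-selection stage in the spirit of the description
# above: VIF, Pearson's correlation, and information gain each nominate features, the
# selections are merged, and an Extra Trees classifier is trained on the result.
import numpy as np
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import mutual_info_classif
from statsmodels.stats.outliers_influence import variance_inflation_factor

data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target

# 1) Variance Inflation Factor: keep features with low multicollinearity (threshold assumed).
vif = np.array([variance_inflation_factor(X.values, i) for i in range(X.shape[1])])
keep_vif = set(X.columns[vif < 10])

# 2) Pearson's correlation with the target: keep the most correlated features (cut-off assumed).
pearson = X.apply(lambda col: abs(np.corrcoef(col, y)[0, 1]))
keep_corr = set(pearson.nlargest(15).index)

# 3) Information gain (mutual information) with the target (cut-off assumed).
mi = pd.Series(mutual_info_classif(X, y, random_state=0), index=X.columns)
keep_mi = set(mi.nlargest(15).index)

# Combine the three selections (a simple union here) and train an Extra Trees classifier.
selected = sorted(keep_vif | keep_corr | keep_mi)
clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X[selected], y)
print(len(selected), "features selected")
```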
“…[26], 2022, 4, -, SMOTE Resampling + L2-SVM
Kakati et al. [27], 2022, 17, -, Transfer learning + CNN
Dai et al. [28], 2021, 3, -, ERGCN
Single Stage Feature Selection:
Mohammed et al. [29], 2021, 5, Lasso, Stacking Ensemble of CNN
Menaga et al. [30], 2021, 2, Wrapper, Fractional-ASO Deep RNN
Al Mamun et al. [31], 2021, 12, mrCAE, -
Multiple Stages Feature Selection:
Majumder et al. [33], 2022, 4, ANOVA + IG, MLP / 1DCNN / 2DCNN
Saberi-Movahed et al. [34], 2022, 9, DR-FS-MFMR (Matrix Factorization + Minimum Redundancy), Unsupervised clustering
Bustamam et al. [35], 2021, 2, SVM-RFE + ABC, SVM
Samieinasab et al. [47], 2022, 1, Ensemble (Variance Inflation Factor, Pearson’s Correlation, Information G...…”
Section: Related Work (mentioning)
confidence: 99%
“…The meta learner learns the combination weights for all base-level decision probabilities and classifies instances. For a stacking ensemble to perform well, it is important to promote the information gain of the features used to train the meta learner through the level-0 base learners [19,20]. The proposed framework is motivated by this rationale.…”
Section: B. Proposed Framework SS-IL (mentioning)
confidence: 99%
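A minimal stacking sketch of this idea: the level-0 base learners emit class probabilities and a level-1 meta learner (logistic regression here) learns how to weight and combine them. The choice of base learners, meta learner, and cross-validation settings are assumptions for illustration, not the cited framework's configuration.

```python
# Minimal stacking sketch: the meta learner is trained on the base learners' class
# probabilities and learns the combination weights; model choices are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

base_learners = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("svc", SVC(probability=True, random_state=0)),
]

# stack_method="predict_proba" feeds base-level probabilities to the level-1 meta learner.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba",
    cv=5,
)

print("CV accuracy:", cross_val_score(stack, X, y, cv=5).mean())
```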