2020
DOI: 10.1007/s10462-020-09914-6
A systematic mapping study for ensemble classification methods in cardiovascular disease

Cited by 13 publications (8 citation statements)
References 118 publications
“…[38][39][40] The inclusion of high-performing and more diverse models such as Neural Network, and Xgboost may have contributed to the reduction of high variance and bias issues, which could potentially be detrimental to discrimination, calibration, clinical utility and overall accuracy across datasets and time. [41,42] For example, the logES-ESII-A ensemble substantially exceeded the performance reported in a small sized study that used an ensemble of GBM, RF, Support vector machine and Naïve Bayes, built using logES, ESII and other clinical variables without temporal consideration of variables, to predict cardiac postoperative mortality (AUC = 0.832 vs 0.795). [16] However, a smaller sized study that included Xgboost as part of a heterogeneous set of Super Learner ensemble did not achieve high performance using pre-operative data compared to this study's logES-ESII-A ensemble (AUC = 0.832 vs. 0.718 [0.687-0.749]), [43] or homogeneous Xgboost and RF (logES-ESII-P) ensembles (AUC = 0.832).…”
Section: Discussion
confidence: 98%
“…All models were evaluated using the applied Holdout dataset from the years 2017-2019. [41] Geometric average was used for all soft-voting transformations to bring probability distribution of base learners into one ensemble distribution. [59] Details of base learner model specification are provided in Supplementary Materials, Section 2.…”
Section: Ensemble Modelling
confidence: 99%
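The excerpt above describes combining base-learner class probabilities by geometric average to form one soft-voting ensemble distribution. A minimal sketch of that combination rule, assuming the standard geometric-mean formulation with renormalization (the function name and toy probabilities are illustrative, not taken from the cited study):

```python
import numpy as np

def geometric_mean_soft_vote(probas):
    """Combine base-learner probability arrays (each of shape
    [n_samples, n_classes]) into one ensemble distribution via the
    geometric mean across models, then renormalize each row to sum to 1."""
    stacked = np.stack(probas)  # shape: [n_models, n_samples, n_classes]
    # Geometric mean computed in log space for numerical stability;
    # probabilities are clipped away from zero to avoid log(0).
    gm = np.exp(np.mean(np.log(np.clip(stacked, 1e-12, 1.0)), axis=0))
    return gm / gm.sum(axis=1, keepdims=True)

# Toy example: two base learners, two samples, binary classification.
p1 = np.array([[0.9, 0.1], [0.4, 0.6]])
p2 = np.array([[0.7, 0.3], [0.2, 0.8]])
ensemble = geometric_mean_soft_vote([p1, p2])
```

Compared with the arithmetic mean used in ordinary soft voting, the geometric mean penalizes disagreement: a class assigned a very low probability by any one base learner receives a low combined score.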
“…Instead, all models were evaluated using the Holdout dataset from the years 2017 to 2019 that were not part of the training process with performances compared to similar studies. 66 …”
Section: Methods
confidence: 99%
“…However, this result is not adequate, and recognition of smaller-class cases is more desirable than larger-class cases. The class imbalance problem can be experienced in fraud detection [16], risk management [36,37], health care [38,39], software quality assurance [40][41][42], sentiment classification [43] and abstract classification [1], etc. The machine learning community has proposed various methods to address the class imbalance issue.…”
Section: Introduction
confidence: 99%