2021
DOI: 10.1007/s00354-021-00124-4
Ensemble Methods for Heart Disease Prediction

Cited by 23 publications (13 citation statements); References 23 publications.
“…Another study [34] used ModifiedBoostARoota-CatBoost (MBAR-CB), CatBoost Classifier (CBC), ModifiedBoostARoota-XGBoost (MBAR-XGB), and XGBoost Classifier (XGBC) for CHD, SHD, and SAHDD, respectively. The study [35] proposed an ensemble learning approach for heart disease prediction. The authors conducted experiments on the SHD and achieved a 0.88 accuracy score using the proposed ensemble model.…”
Section: Results
confidence: 99%
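The quoted study [35] does not specify its ensemble's composition here, only its 0.88 accuracy. A minimal sketch of one common ensemble design for this task, a soft-voting combination of heterogeneous classifiers, assuming scikit-learn and synthetic data as a stand-in for the actual SHD dataset:

```python
# Soft-voting ensemble sketch (assumes scikit-learn; synthetic data
# stands in for the SHD heart-disease dataset used in the study).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Soft voting averages the predicted class probabilities of each base learner.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier(max_depth=5, random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
print(f"test accuracy: {ensemble.score(X_te, y_te):.2f}")
```

The choice of base learners and voting scheme here is illustrative, not the study's actual configuration.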
“…A variety of applications of XGBoost have been cited across many application domains, and the approach works well for small-to-medium sized structured data [30]. Boosting is a machine learning method that sequentially adds several different weak learners to build a strong learner with higher predictive power [31]. Weak learners are added iteratively, using a gradient descent algorithm, until a strong learner with noticeable performance is achieved.…”
Section: Non-infectious Class
confidence: 99%
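The sequential scheme described above can be sketched by hand: for squared-error loss, the negative gradient of the current ensemble is simply the residual, so each new weak learner is fit to the residuals and added with a small step size. A minimal sketch, assuming scikit-learn decision stumps as the weak learners:

```python
# Hand-rolled gradient boosting sketch (squared-error loss, stump learners).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

learning_rate, n_rounds = 0.1, 100
prediction = np.zeros_like(y)
stumps = []
for _ in range(n_rounds):
    # Residual = negative gradient of 0.5 * (y - f(x))^2 w.r.t. f(x).
    residual = y - prediction
    stump = DecisionTreeRegressor(max_depth=1)  # a weak learner
    stump.fit(X, residual)
    # Gradient-descent step in function space.
    prediction += learning_rate * stump.predict(X)
    stumps.append(stump)

print(f"final training MSE: {np.mean((y - prediction) ** 2):.3f}")
```

Libraries such as XGBoost add regularization, second-order gradient information, and shrinkage schedules on top of this basic loop.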
“…Feature selection selects, from the original feature set, those features relevant to the application according to some criterion. Algorithms for selecting features according to different evaluation measures fall into three classes: filters, wrappers, and hybrid models [9–11].…”
Section: FS-Feature Selection
confidence: 99%
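The three classes differ in how candidate features are scored: filters use a statistic computed independently of any model, wrappers query a fitted model, and hybrids combine both. A minimal sketch contrasting a filter (ANOVA F-score via `SelectKBest`) with a wrapper (recursive feature elimination), assuming scikit-learn and synthetic data:

```python
# Filter vs. wrapper feature selection sketch (assumes scikit-learn).
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

# Filter: rank features by the ANOVA F-statistic; no learner is involved.
filt = SelectKBest(score_func=f_classif, k=5).fit(X, y)

# Wrapper: repeatedly fit a model and drop its weakest feature.
wrap = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5).fit(X, y)

print("filter picks:", sorted(filt.get_support(indices=True)))
print("wrapper picks:", sorted(wrap.get_support(indices=True)))
```

Filters are fast and model-agnostic; wrappers are costlier but account for feature interactions as seen by the target model, which is why hybrid methods try to combine the two.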