2019 7th Mediterranean Congress of Telecommunications (CMT)
DOI: 10.1109/cmt.2019.8931411
Identification of Cardiovascular Diseases Using Machine Learning

Cited by 47 publications (24 citation statements)
References 6 publications
“…The results show that the accuracy of LR reaches 85.86%, which is better than that of XGBoost at 84.46%. Louridi et al. [5] used the UCI machine learning repository and compared three methods: SVM, k-Nearest Neighbor (kNN), and NB. Experiments show that the SVM with a linear kernel performs best, with an accuracy of 86.8%.…”
Section: Literature Review
confidence: 99%
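As a concrete illustration of the comparison this excerpt describes, the sketch below trains an SVM with a linear kernel, kNN, and Gaussian NB on a UCI-style heart-disease table with scikit-learn. The file name heart.csv, the target column name, the hyperparameters, and the 80:20 split are assumptions for illustration, not details taken from the cited papers.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Hypothetical local export of a UCI heart-disease table;
# "target" is assumed to encode absence (0) / presence (1) of disease.
df = pd.read_csv("heart.csv")
X, y = df.drop(columns=["target"]), df["target"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "SVM (linear)": make_pipeline(StandardScaler(), SVC(kernel="linear")),
    "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {accuracy_score(y_te, model.predict(X_te)):.3f}")
```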
“…Setiawan et al. [22] implemented an ANN with Rough Set Theory (RST) attribute reduction (ANN-RST) to predict the real missing attribute values in heart disease data; this method outperforms other techniques such as ANN, Piecewise Linear Network-Orthonormal Least Square feature selection (PLN-OLS), and KNN. Louridi et al. [23] suggest filling in missing values with the mean value instead of ignoring them; this approach gives better results with SVM classification (Fig. …”
Section: Related Work
confidence: 98%
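A minimal sketch of the pre-processing choice mentioned above, assuming a hypothetical local cleveland.csv export of the UCI data in which "?" marks missing entries and "num" is the label column: missing values are filled with the column mean inside a pipeline rather than the incomplete instances being discarded, and an SVM is trained on the result.

```python
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical local copy of the Cleveland file; "?" marks missing entries.
df = pd.read_csv("cleveland.csv", na_values="?")
X = df.drop(columns=["num"])
y = (df["num"] > 0).astype(int)          # 0 = absence, 1 = presence of disease

# Mean imputation keeps every instance instead of ignoring incomplete rows.
clf = make_pipeline(SimpleImputer(strategy="mean"),
                    StandardScaler(),
                    SVC(kernel="linear"))
print(f"mean 5-fold accuracy: {cross_val_score(clf, X, y, cv=5).mean():.3f}")
```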
“…N. Louridi et al. [30] proposed a solution to identify the presence/absence of heart disease by replacing missing values with the mean values during pre-processing. They trained three machine learning algorithms, namely NB, SVM (linear and radial basis function), and KNN, by splitting the Cleveland dataset of 303 instances and 13 attributes into 50:50, 70:30, 75:25, and 80:20 training and testing ratios.…”
Section: Related Work
confidence: 99%
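The evaluation protocol described in this excerpt could be approximated as follows. Only the classifier set and the four train:test ratios come from the statement above; the imputation step, hyperparameters, file name, and column names are assumptions for illustration.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Hypothetical local copy of the Cleveland data (303 instances, 13 attributes + label).
df = pd.read_csv("cleveland.csv", na_values="?")
X = df.drop(columns=["num"])
y = (df["num"] > 0).astype(int)

classifiers = {
    "NB": GaussianNB(),
    "SVM-linear": SVC(kernel="linear"),
    "SVM-RBF": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(),
}
# The four training:testing ratios reported in the citation statement.
splits = {"50:50": 0.50, "70:30": 0.30, "75:25": 0.25, "80:20": 0.20}

for label, test_size in splits.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=test_size,
                                              stratify=y, random_state=0)
    for name, clf in classifiers.items():
        model = make_pipeline(SimpleImputer(strategy="mean"), StandardScaler(), clf)
        acc = accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
        print(f"{label}  {name:<10}  accuracy = {acc:.3f}")
```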
“…From the experimental works, it is understood that data pre-processing and feature selection can substantially enhance the classification accuracy of machine learning algorithms. During pre-processing, most researchers [18,19,21,22,26,29-32] replaced the missing values, either with the mean value or the majority mark of that attribute, to make sure the dataset was complete. In some works [20,24,25,27], the instances with missing values were removed.…”
Section: Related Work
confidence: 99%
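For completeness, a short sketch of the three pre-processing strategies surveyed above (mean imputation, majority-mark imputation, and removal of incomplete instances), again assuming a hypothetical cleveland.csv file with "?" marking missing entries:

```python
import pandas as pd
from sklearn.impute import SimpleImputer

# Hypothetical local copy of a UCI heart-disease file; "?" marks missing entries.
df = pd.read_csv("cleveland.csv", na_values="?")

# (a) Replace missing values with the column mean.
mean_filled = pd.DataFrame(SimpleImputer(strategy="mean").fit_transform(df),
                           columns=df.columns)

# (b) Replace missing values with the most frequent value ("majority mark").
mode_filled = pd.DataFrame(SimpleImputer(strategy="most_frequent").fit_transform(df),
                           columns=df.columns)

# (c) Remove every instance that has at least one missing value.
dropped = df.dropna()

print(len(df), len(dropped))   # imputation keeps all rows; dropping loses some
```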