2019 International Conference on Intelligent Computing and Control Systems (ICCS) 2019
DOI: 10.1109/iccs45141.2019.9065877
Improved Framework for Breast Cancer Prediction Using Frequent Itemsets Mining for Attributes Filtering

Cited by 7 publications (3 citation statements). References 9 publications.
“…Frequent itemset mining is used to select the essential features in patients' datasets. The decision tree, Naive Bayes (NB), k-Nearest Neighbors (k-NN), and Support Vector Machine (SVM) classifiers are compared, and SVM is found to outperform the other models (23). A research paper focused on reducing erroneous prediction results, that is, false positives and false negatives.…”
Section: Literature Review
confidence: 99%
“…Sinha et al. [47] introduced attribute-filtering strategies, such as frequent itemsets mining, to identify the most important and applicable attributes from the Wisconsin BC dataset using a classification algorithm such as SVM. Attribute filtering was used to compare NB, k-NN, and DT.…”
Section: Related Work
confidence: 99%
“…Compared with previous works, the proposed method achieves high classification accuracy for breast cancer. However, the researchers in [47] and [45] used the WBC (original) dataset to train and test different DM algorithms. They registered accuracies of 96.61% (SVM) and 98.13% (CART), respectively, despite the high execution time of CART.…”
confidence: 99%