2019 4th International Conference on Electrical Information and Communication Technology (EICT)
DOI: 10.1109/eict48899.2019.9068790
Prediction of Cancer Using Logistic Regression, K-Star and J48 algorithm

Cited by 3 publications (4 citation statements)
References 4 publications
“…Then the data was evaluated only with the lazy algorithms; from the error it was possible to determine the high performance of the instance-based algorithms (LW, IBK and K-Star). Meanwhile, Maliha et al (2019), predicting the causes and appearance of cancer with the J48 and K-Star algorithms, found an accuracy of 99.3% for logistic regression, 99.5% for K-Star, and 99.1% for J48.…”
Section: Results (mentioning)
confidence: 99%
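As a hedged sketch of the comparison quoted above: the cited study used WEKA on its own cancer data, so the numbers below will not match its 99%-range figures. scikit-learn's bundled breast-cancer dataset stands in for the paper's data, `DecisionTreeClassifier(criterion="entropy")` only roughly approximates WEKA's J48 (C4.5), and K-Star has no scikit-learn equivalent, so it is omitted.

```python
# Hedged stand-in for the WEKA comparison described in the citation above.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Scale features so logistic regression converges cleanly.
lr = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
# Entropy-split tree as a rough J48 analogue (not WEKA's C4.5 itself).
j48_like = DecisionTreeClassifier(criterion="entropy", random_state=0)

lr_acc = cross_val_score(lr, X, y, cv=10).mean()
tree_acc = cross_val_score(j48_like, X, y, cv=10).mean()
print(f"logistic regression 10-fold accuracy: {lr_acc:.3f}")
print(f"entropy decision tree 10-fold accuracy: {tree_acc:.3f}")
```

Ten-fold cross-validation mirrors the standard WEKA evaluation protocol, but the exact accuracies depend on the dataset and implementation.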
“…This study aimed to construct an intelligent predictive model by leveraging the selected ML algorithms to predict BC and effectively differentiate between positive and negative BC cases. We trained six well-known classification algorithms, including AB, LR, MLP, NB, J-48, and RF, according to the top related parameters (34,35), lesion biopsy (21), blood tests (36), etc. However, we considered more cost-effective and available data with the minor-intervention features for our prediction models.…”
Section: Discussion (mentioning)
confidence: 99%
“…Therefore, research samples can be classified with the highest performance and the most discriminative capability. The capability of the J-48 decision tree algorithm allows it to embed continuous variables for DM, use the most technical features to prevent overfitting, and adjust the decision size with confidence factors (35,36).…”
Section: Model Development and Assessment (mentioning)
confidence: 99%
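On the pruning point in the statement above: WEKA's J48 controls tree size with a confidence factor (default 0.25). scikit-learn exposes no confidence-factor pruning, so the sketch below uses cost-complexity pruning (`ccp_alpha`) as an assumed analogue, purely to illustrate that adjusting a pruning parameter shrinks the decision tree.

```python
# Hedged analogue of J48's confidence-factor pruning using scikit-learn's
# cost-complexity pruning; the mechanisms differ, the effect (a smaller
# tree that resists overfitting) is what is being illustrated.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

unpruned = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X_tr, y_tr)
pruned = DecisionTreeClassifier(
    criterion="entropy", ccp_alpha=0.01, random_state=0
).fit(X_tr, y_tr)

print("unpruned nodes:", unpruned.tree_.node_count)  # larger tree
print("pruned nodes:  ", pruned.tree_.node_count)    # smaller tree
print("pruned test accuracy:", round(pruned.score(X_te, y_te), 3))
```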
“…It differs from other instance-based learners in that it uses an entropy-based distance function. [56, 57]…”
Section: Methods (mentioning)
confidence: 99%
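The generic instance-based scheme that K-Star shares with nearest-neighbour learners can be sketched minimally as below. Euclidean distance is a stand-in assumption: the actual K* measure scores the entropy (complexity) of transforming one instance into another, which is not implemented here.

```python
# Minimal instance-based classifier: label a query by its nearest stored
# training instance. K-Star keeps this scheme but replaces the Euclidean
# distance with an entropy-based transformation distance.
import numpy as np

def nn_predict(X_train, y_train, x):
    """Return the class of the training instance closest to x."""
    distances = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(distances)]

X_train = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.8]])
y_train = np.array([0, 0, 1, 1])
print(nn_predict(X_train, y_train, np.array([0.2, 0.1])))  # prints 0
print(nn_predict(X_train, y_train, np.array([4.9, 5.1])))  # prints 1
```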