2024
DOI: 10.1371/journal.pone.0295632
Improving prediction of cervical cancer using KNN imputer and multi-model ensemble learning

Turki Aljrees

Abstract: Cervical cancer is a leading cause of women’s mortality, emphasizing the need for early diagnosis and effective treatment. In line with the imperative of early intervention, the automated identification of cervical cancer has emerged as a promising avenue, leveraging machine learning techniques to enhance both the speed and accuracy of diagnosis. However, an inherent challenge in the development of these automated systems is the presence of missing values in the datasets commonly used for cervical cancer detec…
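For reference, the following is a hypothetical sketch of the general pattern the title and abstract describe: KNN-based imputation of missing values followed by a multi-model ensemble. The estimators, hyperparameters, and pipeline shown here are illustrative assumptions, not the paper's actual configuration.

# Hypothetical sketch: KNN imputation followed by a multi-model (soft-voting)
# ensemble. Illustrative only; not the pipeline or hyperparameters from the paper.
from sklearn.pipeline import make_pipeline
from sklearn.impute import KNNImputer
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Three heterogeneous base models combined by averaging predicted probabilities.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100)),
        ("svm", SVC(probability=True)),
    ],
    voting="soft",
)

# Missing values are filled from the 5 most similar samples before modelling.
model = make_pipeline(
    KNNImputer(n_neighbors=5),
    StandardScaler(),
    ensemble,
)

# Usage: model.fit(X_train, y_train); y_pred = model.predict(X_test)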

Cited by 3 publications (2 citation statements)
References: 65 publications
“…The KNN algorithm takes the similarity between samples into account and fills in missing values using information from the nearest samples, which allows the imputed data to better reflect the underlying structure of the data and the relationships between samples. Mode and mean filling, by contrast, consider only the general distribution of each variable and do not account for the correlation between samples [19].…”
Section: Methods
confidence: 99%
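For context, here is a minimal sketch contrasting mean filling with KNN-based imputation on toy data, using scikit-learn's SimpleImputer and KNNImputer; it is an illustrative assumption of how such imputation is typically done, not code from the cited paper or the citing works.

# Minimal sketch: mean filling vs. KNN imputation on a toy feature matrix.
# Illustrative only; not taken from the cited paper or the citing works.
import numpy as np
from sklearn.impute import KNNImputer, SimpleImputer

X = np.array([
    [1.0, 2.0, np.nan],
    [3.0, 4.0, 3.0],
    [np.nan, 6.0, 5.0],
    [8.0, 8.0, 7.0],
])

# Mean filling: each missing entry becomes its column mean, so only the
# marginal distribution of that variable is used.
X_mean = SimpleImputer(strategy="mean").fit_transform(X)

# KNN imputation: each missing entry is the distance-weighted average of that
# feature over the 2 most similar rows, so relationships between samples count.
X_knn = KNNImputer(n_neighbors=2, weights="distance").fit_transform(X)

print("mean-imputed:\n", X_mean)
print("knn-imputed:\n", X_knn)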
“…The KNN algorithm takes the similarity between samples into account and fills in missing values using information from the nearest samples, which allows the imputed data to better reflect the underlying structure of the data and the relationships between samples. Mode and mean filling, by contrast, consider only the general distribution of each variable and do not account for the correlation between samples [19].…”
Section: Data Processing
confidence: 99%