2017
DOI: 10.5815/ijcnis.2017.11.04

Comparative Analysis of KNN Algorithm using Various Normalization Techniques

Abstract: Classification is the technique of identifying and assigning individual quantities to a group or a set. In pattern recognition, the K-Nearest Neighbors algorithm is a non-parametric method for classification and regression. The K-Nearest Neighbor (kNN) technique has been widely used in data mining and machine learning because it is simple yet very useful, with distinguished performance. Classification is used to predict the labels of test data points after training on sample data. Over the past few decades, r…
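The abstract's topic, kNN classification under different feature normalizations, can be illustrated with a short sketch. The following is a minimal, assumed setup (scikit-learn, the Iris data set, k=5, and a 70/30 split are illustrative choices, not necessarily the paper's exact protocol):

```python
# Minimal sketch: comparing KNN accuracy under different feature scalings.
# The dataset, k value, and split are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

scalers = {
    "none": None,
    "min-max": MinMaxScaler(),    # rescales each feature to [0, 1]
    "z-score": StandardScaler(),  # zero mean, unit variance per feature
}

for name, scaler in scalers.items():
    if scaler is not None:
        X_tr = scaler.fit_transform(X_train)  # fit statistics on train only
        X_te = scaler.transform(X_test)
    else:
        X_tr, X_te = X_train, X_test
    knn = KNeighborsClassifier(n_neighbors=5)  # majority vote of 5 nearest points
    knn.fit(X_tr, y_train)
    print(name, accuracy_score(y_test, knn.predict(X_te)))
```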

Cited by 104 publications (57 citation statements)
References 12 publications
“…The test results in this study indicate that the z-score normalization method has a stable accuracy between 95% and 97%. The accuracy value of the z-score method found in this study is higher than the results of research conducted by Pandey and Jain (2017) [5] on the IRIS data set, and by Nasution et al. (2019) [6] on the wine data set.…”
Section: Results (citation type: contrasting)
confidence: 87%
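The statement above compares accuracies obtained with z-score normalization. For reference, a minimal sketch of the z-score transform itself (the NumPy layout, toy values, and the constant-feature guard are my assumptions, not details from the cited study):

```python
# z-score normalization: z = (x - mean) / std, with statistics taken from
# the training data only so the test set does not leak into the scaling.
import numpy as np

def zscore_fit(X):
    """Compute per-feature mean and standard deviation on the training data."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0  # guard against constant features
    return mu, sigma

def zscore_transform(X, mu, sigma):
    """Apply z = (x - mean) / std using the training statistics."""
    return (X - mu) / sigma

X_train = np.array([[1.0, 200.0], [2.0, 240.0], [3.0, 260.0]])
X_test = np.array([[2.5, 250.0]])

mu, sigma = zscore_fit(X_train)
print(zscore_transform(X_test, mu, sigma))  # test data reuses train statistics
```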
“…The classification accuracy obtained by the researchers using 5-fold cross-validation was 98.35%. Subsequently, in 2017, Amit Pandey [7]…”
Section: Introduction (citation type: unclassified)
“…The dataset was split into train and test sets using a 10-Fold approach. Biomarker features were imputed on the test set using the mean value of the K most similar patients from the real biomarker data of the train set using the KNN algorithm [24]. The value of K is determined by the amount of available data.…”
Section: Datasets and Pre-processing (citation type: mentioning)
confidence: 99%
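The imputation step described in this statement can be sketched with scikit-learn's KNNImputer, which fills a sample's missing values with the mean of its k nearest neighbors drawn from the data seen at fit time. The toy arrays and k=2 below are illustrative assumptions, not the cited study's data or settings:

```python
# Sketch of KNN-based imputation: missing test-set values are replaced by
# the mean of the k most similar rows from the training set.
import numpy as np
from sklearn.impute import KNNImputer

X_train = np.array([[1.0, 2.0, 3.0],
                    [1.1, 2.1, 2.9],
                    [5.0, 6.0, 7.0]])
X_test = np.array([[1.05, np.nan, 3.1]])  # one missing biomarker value

imputer = KNNImputer(n_neighbors=2)
imputer.fit(X_train)               # neighbors are drawn from the train set
print(imputer.transform(X_test))   # NaN replaced by mean of the 2 nearest rows
```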