2017
DOI: 10.1007/978-3-319-69900-4_3
kNN Classification with an Outlier Informative Distance Measure

Cited by 8 publications (4 citation statements)
References 12 publications
“…Finally, the sensitivity analysis further proves that the K-NN classifier can be exploited only when a limited amount of data is incorrectly labeled or in the case that a limited number of outliers is included within the dataset [44]. In fact, both accuracy and posterior probability of the K-NN classifier significantly decrease when the rate of UMI is approximately equal to 20%, so that only 66% of data are correctly labeled.…”
Section: Sensitivity Analysis (mentioning)
confidence: 75%
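The finding above concerns how quickly k-NN degrades as training labels are corrupted. A minimal sketch of such a sensitivity check is given below; the synthetic dataset, k = 5, and the label-noise rates are illustrative assumptions, not the setup used in the cited work.

```python
# Minimal sketch: k-NN test accuracy as a growing fraction of training labels is flipped.
# Dataset, k, and noise rates are illustrative assumptions, not the cited experiment.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for noise_rate in (0.0, 0.1, 0.2, 0.3):
    y_noisy = y_tr.copy()
    flip = rng.random(len(y_noisy)) < noise_rate   # instances to mislabel
    y_noisy[flip] = 1 - y_noisy[flip]              # flip the binary labels
    acc = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_noisy).score(X_te, y_te)
    print(f"label-noise rate {noise_rate:.0%}: test accuracy {acc:.3f}")
```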
“…Bhattacharya et al. [6] note that the presence of outliers in a dataset can affect the accuracy of the kNN algorithm. Furthermore, that study states that an outlier score based on differences in density levels can be used to modulate the potential distance function in kNN classification, in order to improve accuracy on datasets that contain outliers.…”
Section: Introduction (unclassified)
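The statement above summarizes the idea at the core of the cited paper: a density-based outlier score modulating the distance used by kNN. The following is a minimal sketch of that idea, assuming an LOF-style score (the ratio of a point's mean k-distance to that of its neighbors) and a simple multiplicative inflation of distances to high-score training points; neither detail is taken from the paper itself.

```python
# Minimal sketch: density-based outlier scores inflate distances to suspected outliers
# before k-NN voting. The score definition and the weighting are illustrative
# assumptions, not the formulation of Bhattacharya et al.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def density_outlier_scores(X, k=5):
    """Score each training point by how sparse its neighborhood is
    relative to its neighbors' neighborhoods (LOF-flavored ratio)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, idx = nn.kneighbors(X)                  # column 0 is the point itself
    mean_kdist = dist[:, 1:].mean(axis=1)         # local sparseness of each point
    neighbor_mean = mean_kdist[idx[:, 1:]].mean(axis=1)
    return mean_kdist / (neighbor_mean + 1e-12)   # > 1 means sparser than its neighbors

def knn_predict_modulated(X_train, y_train, X_test, k=5, alpha=1.0):
    """Majority vote over the k nearest training points, with distances to
    likely outliers inflated by their density-based score."""
    scores = density_outlier_scores(X_train, k=k)
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        d = d * (1.0 + alpha * np.clip(scores - 1.0, 0.0, None))
        nearest = np.argsort(d)[:k]
        preds.append(np.bincount(y_train[nearest]).argmax())
    return np.array(preds)
```

Here `y_train` is assumed to be an integer class-label array; with `alpha = 0` the function reduces to plain unweighted kNN.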
“…Unlike [6] and [7], which improve classification accuracy directly, this study proposes a preprocessing step that removes outliers from the dataset to improve the classification accuracy of the kNN algorithm. Unlike [10] and [11], which apply the K-means algorithm to detect outliers, this study proposes outlier detection using the K-means algorithm and a distance matrix on a dataset that already has class labels.…”
Section: Introduction (unclassified)
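As a rough illustration of the preprocessing described above, the sketch below flags training points that lie far from their assigned K-means centroid and drops them before fitting a standard kNN classifier. The per-cluster threshold (mean plus two standard deviations of centroid distances) and the cluster count are assumed values, not those of the citing study.

```python
# Minimal sketch: K-means-based outlier removal as a preprocessing step for k-NN.
# The threshold rule (mean + 2 * std of centroid distances per cluster) is an
# illustrative assumption.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def remove_kmeans_outliers(X, y, n_clusters=3, n_std=2.0):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    dist_to_centroid = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    keep = np.ones(len(X), dtype=bool)
    for c in range(n_clusters):
        in_c = km.labels_ == c
        thresh = dist_to_centroid[in_c].mean() + n_std * dist_to_centroid[in_c].std()
        keep[in_c] = dist_to_centroid[in_c] <= thresh   # drop far-from-centroid points
    return X[keep], y[keep]

# Usage: clean the training set, then fit a standard k-NN classifier.
# X_train, y_train, X_test are assumed to be NumPy arrays.
# X_clean, y_clean = remove_kmeans_outliers(X_train, y_train)
# knn = KNeighborsClassifier(n_neighbors=5).fit(X_clean, y_clean)
# y_pred = knn.predict(X_test)
```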
“…The advantages of this algorithm include fast training, simplicity and ease of learning, and effectiveness in training on big data [8]. However, KNN can be affected by outliers; as a result, its accuracy is not good enough [9]. Another method used with KNN is Subspace Outlier Detection (SOD), an outlier detection method that models outliers in high-dimensional data.…”
Section: Introduction (mentioning)
confidence: 99%