DOI: 10.1007/978-3-540-87479-9_50

Improving k-Nearest Neighbour Classification with Distance Functions Based on Receiver Operating Characteristics

Abstract: The k-nearest neighbour (k-NN) technique, due to its interpretable nature, is a simple and very intuitively appealing method to address classification problems. However, choosing an appropriate distance function for k-NN can be challenging, and an inferior choice can make the classifier highly vulnerable to noise in the data. In this paper, we propose a new method for determining a good distance function for k-NN. Our method is based on consideration of the area under the Receiver Operating Characteristic…

Cited by 14 publications (9 citation statements) · References: 21 publications
“…For the next step, we compared our models, as well as the baseline XCS, with several other machine learning methods, either with a feature selection method in place or without any feature selection. The results are presented in Table 3 and are cross-checked with the results published by Hossain et al. [1] and Hassan et al. [2]. From Figures 3 and 4, we can claim that, on average, Information Gain delivers better results than other feature ranking methods for our models. Thus, we have used Information Gain as the feature selection approach in conjunction with FS-XCS, GRD-XCS and other machine learning methods.…”
Section: B) Execution Time Results (supporting)
confidence: 68%
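The feature-ranking step this excerpt describes can be illustrated with a short sketch. Below, Information Gain is approximated with scikit-learn's mutual-information estimator (mutual information coincides with information gain for a discrete target); the synthetic dataset, shapes, and the top_k parameter are illustrative assumptions, not details from the cited study.

```python
# Minimal sketch: ranking features by Information Gain (mutual information
# between each feature and the class label). Data and names are assumptions.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_features_by_information_gain(X, y, top_k=10):
    """Return indices of the top_k features, highest Information Gain first."""
    scores = mutual_info_classif(X, y, random_state=0)
    return np.argsort(scores)[::-1][:top_k]

# Usage with synthetic data (a stand-in for the experiments cited above):
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = (X[:, 3] + X[:, 7] > 0).astype(int)  # only features 3 and 7 are informative
print(rank_features_by_information_gain(X, y, top_k=5))
```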
“…In this paper, the selected features are fed to classification algorithms such as KNN [37] and decision tree [38]. The two classifiers are trained on the selected features as binary classifiers, labeling zero as non-coronavirus and one as coronavirus images.…”
Section: Classification (mentioning)
confidence: 99%
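A minimal sketch of the binary classification setup the excerpt describes: KNN and a decision tree trained on an already-selected feature matrix with labels 0 (non-coronavirus) and 1 (coronavirus). The synthetic data, the 70/30 split, and k = 5 are assumptions for illustration; the cited paper works on image features.

```python
# Sketch: KNN and decision tree as binary classifiers on selected features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X_selected = rng.normal(size=(300, 20))   # stand-in for the selected features
y = rng.integers(0, 2, size=300)          # 0 = non-coronavirus, 1 = coronavirus

X_tr, X_te, y_tr, y_te = train_test_split(
    X_selected, y, test_size=0.3, random_state=42)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("Decision tree", DecisionTreeClassifier(random_state=42))]:
    clf.fit(X_tr, y_tr)
    print(f"{name} accuracy: {clf.score(X_te, y_te):.2f}")
```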
“…A similar attempt is proposed by Wang et al. [7], where the authors showed that a simple adaptive distance measure, which normalizes the ordinary Euclidean or Manhattan distance from an unknown pattern to each labeled training sample by the shortest distance from that training sample to training samples belonging to a different class, can be considered the leading factor in improving upon the original kNN classifier. Hassan et al. [1] proposed a weighting technique based on the ROC (Receiver Operating Characteristic) curve.…”
Section: Related Work (mentioning)
confidence: 99%
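The adaptive distance of Wang et al. [7] summarized above is concrete enough to sketch: the Euclidean distance from a query to each training sample is divided by that sample's shortest distance to its nearest "enemy" (the closest training sample of a different class). The function names, k, and the toy data below are an illustrative reconstruction, not the authors' code.

```python
# Sketch of the adaptive distance measure described in the excerpt above.
import numpy as np

def fit_normalizers(X_train, y_train):
    """For each training sample, the distance to the closest sample
    of a different class (its 'nearest enemy')."""
    r = np.empty(len(X_train))
    for i in range(len(X_train)):
        enemies = X_train[y_train != y_train[i]]
        r[i] = np.min(np.linalg.norm(enemies - X_train[i], axis=1))
    return r

def adaptive_knn_predict(query, X_train, y_train, r, k=3):
    d = np.linalg.norm(X_train - query, axis=1) / r  # adaptive distance
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                 # majority vote

# Usage on two Gaussian blobs:
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
r = fit_normalizers(X, y)
print(adaptive_knn_predict(np.array([1.5, 1.5]), X, y, r, k=3))
```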
“…These metrics, due to their properties (symmetry, identity and the triangle inequality), can be used on all kinds of metric spaces; however, this advantage is also their disadvantage. For these norms, the different feature dimensions are weighted equally [1], which can be considered a major disadvantage [2]. In many cases different features contribute differently in the feature space, so classical equal-weight distance calculation is inadequate.…”
Section: Introduction (mentioning)
confidence: 99%
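A short sketch of the per-feature weighting idea raised in this excerpt, in the spirit of the ROC-based distance functions of the paper under discussion: each feature is weighted by how far its single-feature AUC deviates from 0.5 (chance). The exact weighting scheme of Hassan et al. may differ, so treat the weight formula below as an assumption for illustration.

```python
# Sketch: a weighted Euclidean distance with AUC-derived feature weights.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_feature_weights(X_train, y_train):
    """Weight each feature by |AUC - 0.5| of that feature used alone as a
    ranking score: 0 = uninformative, 0.5 = perfectly separating."""
    aucs = np.array([roc_auc_score(y_train, X_train[:, j])
                     for j in range(X_train.shape[1])])
    return np.abs(aucs - 0.5)

def weighted_euclidean(a, b, w):
    return np.sqrt(np.sum(w * (a - b) ** 2))

# Usage:
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 4))
y = (X[:, 0] > 0).astype(int)      # only feature 0 is informative
w = auc_feature_weights(X, y)
print(np.round(w, 2))              # feature 0 should get the largest weight
print(weighted_euclidean(X[0], X[1], w))
```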