2019
DOI: 10.3906/elk-1807-163

A depth-based nearest neighbor algorithm for high-dimensional data classification

Abstract: Nearest neighbor algorithms such as k-nearest neighbors (kNN) are fundamental supervised learning techniques that classify a query instance based on the class labels of its neighbors. However, quite often, huge volumes of data are not fully labeled, and the unknown probability distribution of the instances may be uneven. Moreover, kNN suffers from challenges such as the curse of dimensionality, setting the optimal number of neighbors, and scalability for high-dimensional data. To overcome these challenges, we propose an im…

Cited by 8 publications (2 citation statements)
References 33 publications

“…While K-NN and GRNN are reported and used as two different prediction methods, both are based on a Radial Basis Function (RBF) kernel network, and neither requires an iterative training procedure, unlike backpropagation networks. They approximate an arbitrary function between the input and output data, drawing the function estimate directly from the training data [13]. The main difference between the two methods is that GRNN uses all of the training data to predict the output of a query, whereas K-NN uses only the k data points nearest to the query [14]. Like GRNN, K-NN computes the distances between a query and all examples in the training data set, but unlike GRNN it selects only the specified number (k) of examples closest to the query and determines the similarities of only those k nearest neighbours.…”
Section: Methods Used: K-Nearest Neighbour (K-NN) and Generalized Regression Neural Network (GRNN)
Classification: mentioning (confidence: 99%)
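
The contrast drawn in this excerpt can be condensed into a short sketch. This is a minimal illustration under assumptions made here, not the cited papers' implementation: the Gaussian kernel form, the function names grnn_predict and knn_predict, and the synthetic data are all illustrative.

import numpy as np

def grnn_predict(X_train, y_train, x_query, sigma=1.0):
    # GRNN: weight *every* training target by an RBF kernel of its
    # squared distance to the query (a Nadaraya-Watson style estimate).
    d2 = np.sum((X_train - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.dot(w, y_train) / np.sum(w)

def knn_predict(X_train, y_train, x_query, k=3):
    # K-NN: also computes all distances, but keeps only the k closest
    # training points and averages their targets.
    d = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(d)[:k]
    return y_train[nearest].mean()

# Usage on synthetic data: both predictors answer the same query,
# one from all 50 training points, the other from just the 5 nearest.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = X[:, 0] + 0.1 * rng.normal(size=50)
q = rng.normal(size=4)
print(grnn_predict(X, y, q), knn_predict(X, y, q, k=5))

Both functions scan the full training set, which mirrors the point made in the excerpt: the difference lies not in the distance computation but in how many neighbours contribute to the estimate.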
“…Then it selects the active user's neighbourhood (the most similar neighbours) to calculate the prediction value for an item. Finally, the algorithm compares the prediction value against a given threshold to make the recommendation decision [17]. To reduce computational complexity, model-based CF uses part of the rating matrix to estimate or learn a model that generates the predictions.…”
Section: Traditional Approaches
Classification: mentioning (confidence: 99%)
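
The neighbourhood step this excerpt describes can be sketched as below. The cosine similarity measure, the threshold value, and every name here are illustrative assumptions; the cited work [17] may make different choices.

import numpy as np

def predict_and_recommend(ratings, active, item, top_n=2, threshold=3.0):
    # ratings: users x items matrix, with 0 meaning "unrated".
    sims = []
    for u in range(ratings.shape[0]):
        if u == active or ratings[u, item] == 0:
            continue
        # Cosine similarity between the two users' rating vectors
        # (an assumed choice; Pearson correlation is also common).
        num = np.dot(ratings[active], ratings[u])
        den = np.linalg.norm(ratings[active]) * np.linalg.norm(ratings[u])
        sims.append((num / den if den else 0.0, ratings[u, item]))
    sims.sort(reverse=True)          # most similar neighbours first
    neigh = sims[:top_n]             # the active user's neighbourhood
    total = sum(s for s, _ in neigh)
    # Prediction: similarity-weighted average of the neighbours' ratings.
    pred = sum(s * r for s, r in neigh) / total if total else 0.0
    return pred, pred >= threshold   # compare to threshold to decide

# Usage: predict user 0's rating for item 2 from a tiny rating matrix.
R = np.array([[5, 3, 0, 4],
              [4, 0, 4, 3],
              [1, 1, 5, 0],
              [5, 4, 0, 5]], dtype=float)
print(predict_and_recommend(R, active=0, item=2))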