2021
DOI: 10.1016/j.ijar.2021.08.006
Evidential instance selection for K-nearest neighbor classification of big data

Cited by 25 publications (4 citation statements) · References 48 publications
“… K-NN is a classification and regression method first proposed by T. Cover and P. Hart in 1967 [22]. The K-NN method performs well when the input space is low-dimensional and the true class boundaries are irregular. SVM is a supervised learning algorithm developed in 1992 by Boser, Guyon, and Vapnik based on statistical learning theory.…”
Section: Methods
confidence: 99%
“…The K-Neighbors Classifier (KNC) [34] method operates by utilizing the k-nearest neighbors algorithm to predict pomegranate growth stages. It assesses the similarity of a given pomegranate sample to its k-nearest neighbors in a training dataset, classifying the growth stage based on the most prevalent stages within that neighborhood.…”
Section: K-Neighbors Classifier
confidence: 99%
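The voting scheme described above can be sketched in a few lines of plain Python. This is a minimal illustration of k-NN majority voting, not the cited paper's method; the growth-stage labels and coordinates are invented toy data.

```python
from collections import Counter
import math

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Sort training indices by Euclidean distance to the query
    nearest = sorted(range(len(train)), key=lambda i: math.dist(train[i], query))
    # Majority vote over the k closest neighbors
    votes = Counter(labels[i] for i in nearest[:k])
    return votes.most_common(1)[0][0]

# Toy data: two clusters standing in for two growth stages
X = [[1.0, 1.0], [1.2, 0.9], [0.8, 1.1], [5.0, 5.0], [5.2, 4.8], [4.9, 5.1]]
y = ["early", "early", "early", "ripe", "ripe", "ripe"]

print(knn_predict(X, y, [1.1, 1.0], k=3))  # query near the first cluster -> "early"
```

The classifier stores the training set as-is and defers all work to prediction time, which is why k-NN is often called a lazy learner.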
“…In the context of heart disease, KNN has been used to predict the likelihood of a person having heart disease based on various attributes. For example, one study achieved an accuracy rate of 86.95% using KNN with 12 attributes [37]. The selection of the K parameter in KNN affects the classification results by determining the number of nearest neighbors considered when making predictions.…”
Section: Literature Review
confidence: 99%
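The sensitivity to K noted above is easy to demonstrate with a toy example: a borderline query that sits next to a single sample of one class but near a larger cluster of the other flips its predicted label as K grows. The data, labels, and the `knn_predict` helper here are illustrative assumptions, not from the cited study.

```python
from collections import Counter
import math

def knn_predict(train, labels, query, k):
    """Majority vote among the k nearest neighbors (Euclidean distance)."""
    nearest = sorted(range(len(train)), key=lambda i: math.dist(train[i], query))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# A lone "disease" sample next to the query, with a larger "healthy"
# cluster farther away: the prediction depends on K.
X = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [1.0, 1.0]]
y = ["healthy", "healthy", "healthy", "disease"]
q = [0.9, 0.9]

print(knn_predict(X, y, q, k=1))  # "disease": the single nearest point decides
print(knn_predict(X, y, q, k=3))  # "healthy": the wider neighborhood outvotes it
```

Small K tracks local structure (and noise) closely; larger K smooths the decision boundary, which is exactly the trade-off behind tuning K.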