2009
DOI: 10.1007/978-0-387-88615-2_4
k-Nearest Neighbor Classification

Cited by 226 publications (99 citation statements)
References 0 publications
“…The RD classifier acts as a proxy for the behaviour of researchers who do not know much about model checking tools, while SD can be considered as mirroring the behaviour of experienced verification researchers who know the patterns supported by each tool and the fastest-tools distribution, but do not know which is the best tool for a specific property to be checked on a specific model. The remaining five methods are: the SVM classifier (Chang and Lin, 2011); LR (Yu et al., 2011); the KNN classifier (Mucherino et al., 2009); and two types of ensemble methods, namely ERT (Geurts et al., 2006) and RFs (Breiman, 2001) (despite their names, these are not random classifiers but ensemble classifiers). We used the scikit-learn library (Pedregosa et al., 2011) implementation of these classifiers in our experiments.…”
Section: Results (mentioning)
confidence: 99%
“…We have compared seven methods; five of them are powerful and widely used algorithms, namely, the support vector machine classifier (SVM) (Chang and Lin, 2011), logistic regression (LR) (Yu et al., 2011), the K-nearest neighbour classifier (KNN) (Mucherino et al., 2009), extremely randomized trees (ERT) (Geurts et al., 2006) and random forests (RFs) (Breiman, 2001), and two of the classifiers are for baseline predictions, namely, Random Dummy (RD) and Stratified Dummy (SD). We used 10-fold cross-validation for training and testing the classifiers.…”
Section: Methods (mentioning)
confidence: 99%
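The excerpt above describes comparing classifiers under 10-fold cross-validation. As a minimal pure-Python sketch of that protocol, applied only to a k-NN classifier (the chapter's topic) rather than the full scikit-learn suite the authors used; the function names and the toy index-based fold split are illustrative assumptions, not the paper's code:

```python
import random

def knn_predict(train, query, k=3):
    """Classify query by majority vote among its k nearest training points
    (squared Euclidean distance, which preserves the neighbor ranking)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    neighbors = sorted(train, key=lambda row: dist(row[0], query))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

def cross_validate(data, folds=10, k=3):
    """Average k-NN accuracy over `folds`-fold cross-validation.

    data: list of (feature_tuple, label) pairs.
    """
    data = data[:]
    random.Random(0).shuffle(data)  # fixed seed so folds are reproducible
    accuracies = []
    for i in range(folds):
        test = data[i::folds]  # every folds-th sample forms the held-out fold
        train = [s for j, s in enumerate(data) if j % folds != i]
        hits = sum(knn_predict(train, x, k) == y for x, y in test)
        accuracies.append(hits / len(test))
    return sum(accuracies) / folds
```

On two well-separated toy clusters this reaches perfect held-out accuracy, which is what the cross-validation loop is meant to measure.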
“…A two-stage approach is proposed here to identify the outliers. First, the K-Nearest Neighbors [35] of the SUT are extracted from the matrix F_{M−1}, obtaining K different signatures f_k, again stacked in a matrix F_k.…”
Section: Intrusion Attack Detection Algorithm (mentioning)
confidence: 99%
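The excerpt above uses K nearest neighbors as the first stage of outlier detection. A common k-NN outlier score is the distance to the k-th nearest neighbor; the sketch below illustrates that generic idea, not the cited paper's exact two-stage method, and the threshold value is a made-up assumption:

```python
import math

def knn_outlier_score(points, query, k=3):
    """Distance from query to its k-th nearest neighbor among `points`.

    Large scores suggest the query sits far from any dense region, i.e. an outlier.
    """
    dists = sorted(math.dist(p, query) for p in points if p != query)
    return dists[k - 1]

def is_outlier(points, query, k=3, threshold=2.0):
    """Flag query as an outlier when its k-NN distance exceeds a
    (hypothetical, dataset-dependent) threshold."""
    return knn_outlier_score(points, query, k) > threshold
```

In practice the threshold would be calibrated on the normal signatures, e.g. as a high percentile of their own k-NN distances.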
“…[17] used k-nearest neighbor classification techniques to classify agricultural data using the Euclidean, Hamming, and Manhattan distance methods. Unda-Trillas, E. et al. [18] described methods to build classification and regression trees (CART), which use the Gini index as an impurity measure that is a generalization of the binomial variance. Berk, R.A et.…”
Section: Related Work (mentioning)
confidence: 99%
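The three distance metrics named in the excerpt (Euclidean, Hamming, Manhattan) are the main tunable choice in k-NN classification. A generic sketch of each, and of a k-NN classifier parameterized by the metric; this is not the cited paper's code, and the helper names are illustrative:

```python
def euclidean(a, b):
    """Straight-line distance between numeric vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def manhattan(a, b):
    """Sum of absolute coordinate differences (city-block distance)."""
    return sum(abs(x - y) for x, y in zip(a, b))

def hamming(a, b):
    """Number of positions at which two equal-length sequences differ;
    suited to categorical or string-valued attributes."""
    return sum(x != y for x, y in zip(a, b))

def knn_classify(train, query, k=3, metric=euclidean):
    """Majority-vote k-NN classification under the chosen distance metric."""
    neighbors = sorted(train, key=lambda row: metric(row[0], query))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)
```

Swapping the `metric` argument is all it takes to move between numeric data (Euclidean, Manhattan) and categorical data (Hamming).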