New Fundamental Technologies in Data Mining 2011
DOI: 10.5772/13971

Classifiers Based on Inverted Distances

Cited by 10 publications (6 citation statements)
References 19 publications (17 reference statements)
“…But because our method (EbkNN) is not related in any way to either of these previous works, I merely make a mention of them here and do not discuss them. Also because our work (EbkNN) is neither an advancement over these works [27,2,28,29,30,31,32,33] nor is derived from them, we do not immediately conclude or claim in this paper that EbkNN outperforms a subset of these and is inferior to the rest. Moreover, we do not state in this paper any mathematical theorem giving the lower and upper bounds on the probability of error of EbkNN.…”
Section: Introduction
confidence: 83%
“…They took k values as large as 30, 45 and 65, and found that the performance of the kNN on Reuters versions 3 and 4 was one of the best. There is a plethora of other literature that solves the problem of optimizing the performance of kNN by fixing k [30,31,32,33]. But because our method (EbkNN) is not related in any way to either of these previous works, I merely make a mention of them here and do not discuss them.…”
Section: Introduction
confidence: 96%
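The fixed-k tuning described in this statement can be reproduced with off-the-shelf tools. A minimal sketch, assuming scikit-learn and its bundled digits dataset as a stand-in corpus (the cited works used Reuters); the candidate values 30, 45 and 65 mirror the quote:

```python
# Minimal sketch (not from the cited works): choosing a fixed k for kNN
# by cross-validated grid search over a few candidate values.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)

# Candidate k values; 30, 45 and 65 echo the range tried in the quote above.
param_grid = {"n_neighbors": [30, 45, 65]}
search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
search.fit(X, y)

print("best k:", search.best_params_["n_neighbors"])
print("cross-validated accuracy: %.3f" % search.best_score_)
```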
“…For the classification of the pure isomers (Figure a), 300 data points, i.e., 100 data points of each isomer, were used. The K-value in kNN is set to 40, corresponding to the square root of the number of points, here 1600, following a general recommendation for setting K. (Varying the K-value by 50% did not result in a significant change of the classification results.) The kNN classification was performed using 10-fold cross-validation, with 90% of the data points used as the training set (i.e., 1440 points) and 10% as the test set (160 points).…”
Section: Methods
confidence: 99%
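As an illustration of the K = √N rule and the 10-fold protocol in the statement above, a minimal sketch; the synthetic three-class data merely stands in for the isomer measurements, and scikit-learn is an assumed dependency:

```python
# Minimal sketch (stand-in data, not the paper's measurements) of the
# K = sqrt(N) heuristic combined with 10-fold cross-validation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# 1600 synthetic points in three classes, mimicking the three isomers.
X, y = make_classification(n_samples=1600, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)

k = int(round(np.sqrt(len(X))))   # sqrt(1600) = 40, as in the quote
knn = KNeighborsClassifier(n_neighbors=k)

# 10-fold CV: each fold trains on 1440 points and tests on the remaining 160.
scores = cross_val_score(knn, X, y, cv=10)
print(f"k = {k}, mean accuracy = {scores.mean():.3f}")
```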
“…We retrieve the latent space vectors for all flipped training images as well and used only a single image per scene (i.e., not all 10 variations). We chose k = √ N = 115, where N is the size of the training data together with its flipped version [84]. We froze the same layers of the pretrained models for fine-tuning the later layers in case of classification models or to train our autoencoder using it as an extractor.…”
Section: Sviro To Ticammentioning
confidence: 99%
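The same √N rule applied to retrieval over embeddings can be sketched as follows; the random 128-dimensional "latent" vectors are placeholders for the paper's actual embeddings, and N = 13225 is chosen only so that √N = 115 matches the quote:

```python
# Minimal sketch (hypothetical latent vectors, not the paper's pipeline) of
# k-nearest-neighbour retrieval with k = round(sqrt(N)) over embeddings.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
latents = rng.normal(size=(13225, 128))   # N training embeddings (incl. flips)

k = int(round(np.sqrt(len(latents))))     # sqrt(13225) = 115, as in the quote
index = NearestNeighbors(n_neighbors=k).fit(latents)

query = rng.normal(size=(1, 128))         # latent vector of one test image
distances, neighbours = index.kneighbors(query)
print(f"k = {k}, nearest training index = {neighbours[0, 0]}")
```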