1997
DOI: 10.1109/34.566814

A bootstrap technique for nearest neighbor classifier design

Abstract: A bootstrap technique for nearest neighbor classifier design is proposed. Our primary interest in designing a classifier is in small training sample size situations. Conventional bootstrapping techniques sample the training samples with replacement. On the other hand, our technique generates bootstrap samples by locally combining original training samples. The nearest neighbor classifier is designed on the bootstrap samples and is tested on test samples independent of the training samples. The perform…
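The idea the abstract describes, creating new samples by locally combining training vectors rather than resampling with replacement, can be sketched in a few lines. Below is a minimal illustration, assuming each bootstrap sample is a random convex combination of a training vector's r nearest same-class neighbors; the names local_bootstrap and one_nn_predict and the random-weight scheme are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def local_bootstrap(X, y, r=3, rng=None):
    """Generate one bootstrap sample per training vector by locally
    combining it with nearby same-class vectors (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    Xb = np.empty_like(X, dtype=float)
    for i in range(len(X)):
        same = np.where(y == y[i])[0]          # same-class indices (includes i)
        d = np.linalg.norm(X[same] - X[i], axis=1)
        k = min(r, len(same))                  # class may have fewer than r members
        nn = same[np.argsort(d)[:k]]           # k nearest same-class neighbors
        w = rng.random(k)
        w /= w.sum()                           # random convex weights (an assumption)
        Xb[i] = w @ X[nn]                      # weighted local combination
    return Xb, y.copy()

def one_nn_predict(Xb, yb, X_test):
    """1-NN: each test vector takes the label of its nearest bootstrap sample."""
    d = np.linalg.norm(Xb[None, :, :] - X_test[:, None, :], axis=2)
    return yb[np.argmin(d, axis=1)]
```

As in the abstract, the classifier is designed on the generated samples (Xb, yb) and evaluated on test samples kept independent of the training set.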


Cited by 105 publications (72 citation statements)
References 19 publications (1 reference statement)
“…To overcome the difficulty with the sparsity of data in such spaces, researchers have investigated using a decision threshold [1], [4] and enhancing the training set by bootstrapping [5].…”
Section: Introduction
confidence: 99%
“…The schemes reported by Chang [5] (the PNN), Xie et al [10] (the VQ), Hamamoto et al [11] (the BT), Song and Lee [13] (the LVQ), and Kim and Oommen [19] (the HYB) create new prototype vectors (and do not merely select training samples) in such a way that these prototypes represent all the vectors in the original set in the "best" possible manner. The methods of Hart [3] (the CNN), Gates [4] (the RNN), Ritter et al [6] (the SNN), Tomek [7] (the mCNN), Devijver and Kittler [8] (the ENN), Fukunaga [9] (the PZN), Bezdek [15] (the GA and RS), Sanchez et al [16] (the PG), and Lipowezky [17] (the PF) are those in which the prototype vectors are merely selected.…”
Section: A Taxonomy of PRSs
confidence: 99%
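The quote above splits prototype reduction schemes into generation methods (PNN, VQ, BT, LVQ, HYB), which create new vectors, and selection methods (CNN, RNN, SNN, mCNN, ENN, PZN, GA/RS, PG, PF), which only keep a subset of the training set. As a concrete example of the selection side, here is a minimal sketch in the spirit of Hart's CNN; the name condensed_nn and the randomized scan order are illustrative choices, not the published algorithm verbatim.

```python
import numpy as np

def condensed_nn(X, y, rng=None):
    """Hart-style condensed nearest neighbor (prototype *selection*):
    keep a subset S of the training set such that 1-NN over S
    classifies every training sample correctly."""
    rng = np.random.default_rng() if rng is None else rng
    order = rng.permutation(len(X))
    keep = [int(order[0])]            # seed the condensed set
    changed = True
    while changed:                    # repeat passes until no sample is absorbed
        changed = False
        for i in order:
            if i in keep:
                continue
            d = np.linalg.norm(X[keep] - X[i], axis=1)
            if y[keep[int(np.argmin(d))]] != y[i]:
                keep.append(int(i))   # misclassified by S, so add it to S
                changed = True
    return X[keep], y[keep]
```

Contrast this with the bootstrap sketch after the abstract, which generates new vectors instead of selecting existing ones.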
“…In other cases, we cannot estimate, a priori, the maximum number of iterations which should be done. The former is the criterion employed in the CNN [3], RNN [4], SNN [6], mCNN [7], ENN [8], VQ [10], BT [11] and GA (RS) [15], and the latter is the criterion used in the PNN [5], PZN [9], LVQ [13], the PG [16] and HYB [19].…”
Section: A Taxonomy of PRSs
confidence: 99%
“…• Selection of a design subset of prototypes from a given set of prototype vectors [5,8,11,18,21] and generation of prototype reference vectors [7].…”
Section: Related Work
confidence: 99%