2007
DOI: 10.1016/j.patcog.2007.03.006

On using prototype reduction schemes to optimize dissimilarity-based classification

Cited by 21 publications (26 citation statements)
References 17 publications

“…For example, Lozano et al. [13] employed prototype optimization methods often applied in vector spaces, such as editing and condensing, for constructing more general dissimilarity-based classifiers. Kim and Oommen [8] used the well-known condensed nearest neighbor rule [14] to reduce the original training set before computing the dissimilarity-based classifiers on the entire data. Other methods have been developed specifically for the dissimilarity space, such as Kcentres, Edicon, ModeSeek, Featsel and a genetic algorithm [6,9].…”
Section: Prototype Selection Methods (mentioning, confidence: 99%)
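
The condensed nearest neighbor (CNN) rule cited above admits a very compact statement. Below is a minimal sketch in Python/NumPy, assuming Hart's classic formulation; the function name and the single-sample seeding are illustrative choices, not the cited authors' code.

```python
import numpy as np

def condensed_nn(X, y):
    """Hart's condensed nearest neighbor rule (minimal sketch).

    Grows a 'store' of prototypes until every training sample is
    classified correctly by the 1-NN rule against that store.
    """
    store = [0]                        # seed the store with one sample
    changed = True
    while changed:                     # sweep until a full pass adds nothing
        changed = False
        for i in range(len(X)):
            if i in store:
                continue
            # classify sample i with 1-NN against the current store
            d = np.linalg.norm(X[store] - X[i], axis=1)
            nearest = store[int(np.argmin(d))]
            if y[nearest] != y[i]:     # misclassified: absorb it
                store.append(i)
                changed = True
    return np.array(store)
```

The returned indices define the reduced training set from which a dissimilarity-based classifier would then be computed.
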
“…In this context, prototype selection constitutes one of the most active research lines, which has primarily been addressed in two ways: (i) finding a small representation set capable of generating a low-dimensional dissimilarity space [4,6,7], and (ii) reducing the original dissimilarity matrix [8,9].…”
Section: D(T, T) to D(T, R) (mentioning, confidence: 99%)
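
A minimal sketch of approach (i): map every object to its vector of dissimilarities to a small representation set R, then train an ordinary vector-space classifier there. Euclidean distance, random prototype selection, and scikit-learn's LinearDiscriminantAnalysis are illustrative assumptions, not prescribed by the cited works.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def dissimilarity_space(X, R):
    """Map each sample in X to its distances to the prototypes in R.

    The result has shape (len(X), len(R)), so the dimensionality of
    the new space equals |R|, not the original feature count.
    """
    return np.linalg.norm(X[:, None, :] - R[None, :, :], axis=2)

# Toy stand-in for a training set T with binary labels.
rng = np.random.default_rng(0)
T = rng.normal(size=(100, 5))
y = (T[:, 0] > 0).astype(int)

# A small representation set R drawn from T (here simply at random).
R = T[rng.choice(len(T), size=10, replace=False)]

# Train a standard classifier on the 10-dimensional space D(T, R).
clf = LinearDiscriminantAnalysis().fit(dissimilarity_space(T, R), y)
```
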
“…Within the family of supervised classifiers we find the nearest neighbour (NN) rule [1,2], which predicts the class of a new sample by computing a similarity measure [3,4] between it and all prototypes in the training set; its extension to k neighbours is the k-nearest neighbours (k-NN) classifier. Recent studies show that the k-NN classifier can be improved by numerous procedures.…”
Section: Introduction (mentioning, confidence: 99%)
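
For reference, the NN/k-NN rule described in this excerpt reduces to a few lines of NumPy. A minimal sketch follows; the majority vote via bincount assumes non-negative integer class labels, an illustrative simplification.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    d = np.linalg.norm(X_train - x, axis=1)         # dissimilarity to every prototype
    nearest = np.argsort(d)[:k]                     # indices of the k closest ones
    return np.bincount(y_train[nearest]).argmax()   # most frequent label wins
```

With k = 1 this is the plain NN rule; larger k trades locality for robustness to label noise.
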
“…The major questions we encountered when designing DBCs are summarized as follows: (1) how to select prototypes; (2) how to measure dissimilarities between object samples; and (3) how to design classifiers in the dissimilarity space. Several strategies have been used to explore these questions [3], [4], [5]. The details of those strategies are omitted here; the present paper addresses the first question.…”
Section: Introduction (mentioning, confidence: 99%)
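
The three questions map onto three pluggable stages of a DBC pipeline. The schematic sketch below makes that correspondence explicit; the helper signatures, the random selector, and the logistic-regression classifier are hypothetical choices for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_dbc(T, y, select_prototypes, dissimilarity):
    """Schematic DBC pipeline with the three design questions as plug-ins."""
    R = select_prototypes(T, y)                        # (1) prototype selection
    D = np.array([[dissimilarity(t, r) for r in R]     # (2) dissimilarity measure
                  for t in T])
    return LogisticRegression().fit(D, y), R           # (3) classifier in D-space

# Illustrative usage with random selection and Euclidean dissimilarity.
rng = np.random.default_rng(1)
T = rng.normal(size=(60, 4))
y = (T.sum(axis=1) > 0).astype(int)
clf, R = train_dbc(
    T, y,
    select_prototypes=lambda T, y: T[rng.choice(len(T), 8, replace=False)],
    dissimilarity=lambda a, b: np.linalg.norm(a - b),
)
```
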
“…In these methods, a training set, T, is pruned to yield a set of representative prototypes, Y, where, without loss of generality, |Y| ≤ |T|. On the other hand, by invoking a prototype reduction scheme (PRS), Kim and Oommen [5] also obtained a representative subset, Y, which is utilized by the DBC. Besides using PRSs, Kim and Oommen also proposed the Mahalanobis distance as the dissimilarity-measurement criterion.…”
Section: Introduction (mentioning, confidence: 99%)
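
As a rough illustration of the Mahalanobis distance used as the dissimilarity-measurement criterion, the sketch below pools the covariance over the whole training set; this is an assumed simplification, and the cited work may estimate the covariance differently (e.g., per class).

```python
import numpy as np

def mahalanobis_matrix(T, Y):
    """Dissimilarity matrix D(T, Y) under the Mahalanobis distance."""
    VI = np.linalg.inv(np.cov(T, rowvar=False))   # inverse pooled covariance
    diff = T[:, None, :] - Y[None, :, :]          # all pairwise differences
    # d(t, y)^2 = (t - y)^T VI (t - y), computed for every (t, y) pair.
    return np.sqrt(np.einsum('ijk,kl,ijl->ij', diff, VI, diff))
```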