1999
DOI: 10.1109/72.809092

Sample selection via clustering to construct support vector-like classifiers

Abstract: This paper explores the possibility of constructing RBF classifiers which, somewhat like support vector machines, use a reduced number of samples as centroids, selected directly from the training set. Because sample selection is a hard computational problem, the selection is performed after a preliminary vector quantization; in this way, other similar machines are also obtained, using centroids selected from those learned in a supervised manner. Several forms of designing these machines are …

Cited by 60 publications (23 citation statements)
References 22 publications
“…Lyhyaoui et al (1999) indicate their theoretical advantages: (i) clustering-based techniques can always eliminate the useless vectors from T, (ii) they are applicable to multi-class problems, and (iii) their cost objectives may be freely established for a given problem. However, these methods suffer from the difficult problem of determining a potentially large number of parameters (the clustering parameters and the number of vectors annotated as important for each cluster being the most significant).…”
Section: Clustering-based Methods
confidence: 99%
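The clustering-based selection scheme described in this citation can be illustrated with a short sketch: cluster each class separately, then keep the real training sample nearest each centroid. This is an illustrative reconstruction, not the authors' exact algorithm; the parameter `k_per_class` stands in for the clustering parameters the citation warns must be tuned.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_by_clustering(X, y, k_per_class=3, random_state=0):
    """Per-class k-means, then keep the training sample nearest each
    centroid. Hypothetical sketch of clustering-based sample selection."""
    selected_X, selected_y = [], []
    for label in np.unique(y):
        Xc = X[y == label]
        km = KMeans(n_clusters=k_per_class, n_init=10,
                    random_state=random_state).fit(Xc)
        for c in km.cluster_centers_:
            # nearest real sample to this centroid
            idx = np.argmin(np.linalg.norm(Xc - c, axis=1))
            selected_X.append(Xc[idx])
            selected_y.append(label)
    return np.array(selected_X), np.array(selected_y)
```

Note that every kept vector is an actual training sample, which is what makes the resulting machine "support vector-like" rather than a plain RBF network with synthetic centroids.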
“…This procedure is most often performed for each class in T independently. Lyhyaoui et al (1999) applied frequency-sensitive competitive learning to cluster the training set vectors (Scheunders and Backer 1999), with various numbers of centroids for each class. Once the centroids are determined, they are further analyzed to extract the most important (critical) ones.…”
Section: Clustering-based Methods
confidence: 99%
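Frequency-sensitive competitive learning, mentioned in this citation, inflates each unit's distance by its win count so that rarely-winning units eventually capture samples (avoiding dead units). A minimal sketch follows, assuming a simple online update rule; it is not necessarily the exact variant used in the cited work.

```python
import numpy as np

def fscl(X, n_centroids=4, lr=0.05, epochs=10, seed=0):
    """Frequency-sensitive competitive learning sketch: the winner is
    chosen by win-count-scaled distance, then nudged toward the sample."""
    rng = np.random.default_rng(seed)
    # initialize centroids from random training samples
    W = X[rng.choice(len(X), n_centroids, replace=False)].astype(float)
    wins = np.ones(n_centroids)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            # frequency-scaled distance penalizes frequent winners
            d = wins * np.linalg.norm(W - x, axis=1)
            j = int(np.argmin(d))
            W[j] += lr * (x - W[j])  # move winner toward the sample
            wins[j] += 1
    return W
```

The win-count scaling is the key design choice: plain competitive learning can leave some units never updated, whereas here a unit that wins often becomes progressively "more expensive", handing later samples to its neighbors.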
“…In general, the receiver can have access to a block of $K$ training samples. For notational convenience, denote the training set of noisy received signal vectors as
$$\{\mathbf{r}(k)\}_{k=1}^{K} \quad (22)$$
and the set of corresponding class labels as
$$\{c(k)\}_{k=1}^{K},\quad c(k)\in\{-1,+1\}. \quad (23)$$
Applying the standard SVM method [19], an SVM detector can be constructed for the user of interest,
$$y(\mathbf{r})=\operatorname{sgn}\Big(\sum_{k=1}^{K}\alpha_k\,c(k)\,K(\mathbf{r},\mathbf{r}(k))+b\Big), \quad (24)$$
where the set of Lagrangian multipliers, denoted in the vector form
$$\boldsymbol{\alpha}=[\alpha_1,\dots,\alpha_K]^{T}, \quad (25)$$
is the solution of the quadratic programming (QP) problem
$$\max_{\boldsymbol{\alpha}}\ \sum_{k=1}^{K}\alpha_k-\frac{1}{2}\sum_{k=1}^{K}\sum_{l=1}^{K}\alpha_k\alpha_l\,c(k)\,c(l)\,K(\mathbf{r}(k),\mathbf{r}(l)) \quad (26)$$
with the constraints
$$0\le\alpha_k\le C,\quad k=1,\dots,K, \quad (27)$$
and
$$\sum_{k=1}^{K}\alpha_k\,c(k)=0. \quad (28)$$
In this particular application, it is obviously advantageous to choose the Gaussian kernel function
$$K(\mathbf{u},\mathbf{v})=\exp\!\big(-\|\mathbf{u}-\mathbf{v}\|^{2}/(2\sigma^{2})\big), \quad (29)$$
where the width parameter $\sigma$ is related to the root mean square of the channel noise, an estimate of which can be obtained. The offset constant $b$ is usually determined from the "margin" SVs, i.e., those $\mathbf{r}(k)$s with Lagrangian multipliers $0<\alpha_k<C$.…”
Section: The Support Vector Machine Detector
confidence: 99%
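The quoted SVM-detector construction can be sketched with an off-the-shelf RBF-kernel SVM. The data below (noisy ±1 symbol vectors) and the noise level `sigma` are assumed stand-ins for the received signals and channel-noise estimate mentioned in the quote; only the idea of tying the Gaussian kernel width to the noise estimate comes from the citation.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
sigma = 0.3                                         # assumed noise RMS estimate
symbols = rng.choice([-1.0, 1.0], size=(200, 2))    # clean signal points
X = symbols + rng.normal(0.0, sigma, symbols.shape) # noisy observations
y = np.sign(symbols[:, 0])                          # detect one user's bit

# Gaussian kernel width tied to the noise estimate, as the citation
# suggests: K(u, v) = exp(-||u - v||^2 / (2 * sigma^2)).
svm = SVC(kernel="rbf", gamma=1.0 / (2.0 * sigma**2), C=10.0).fit(X, y)

# svm.dual_coef_ holds the products alpha_k * c(k) for the support
# vectors; svm.intercept_ is the offset b determined from margin SVs.
```

Reading off `dual_coef_` and `support_vectors_` after fitting mirrors the role of the Lagrangian multipliers and the reduced sample set in the quoted formulation.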
“…to select the most important samples) plays an important role. Plenty of work on sample selection has been done, based for example on clustering methods [4,5], the Mahalanobis distance [6], the β-skeleton and the Hausdorff distance [7,8], and information theory [9,10]. Although much research progress has been achieved, problems still remain.…”
Section: Introduction
confidence: 99%
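Of the sample-selection families this citation lists, the distance-based one is easy to illustrate: rank the samples of a class by Mahalanobis distance to the class mean and keep the farthest (boundary-adjacent) ones. This is a hypothetical illustration of the general idea, not the specific method of the cited reference [6].

```python
import numpy as np

def mahalanobis_select(X, keep=10):
    """Keep the samples of one class farthest from the class mean in
    Mahalanobis distance (these tend to lie near the class boundary)."""
    mu = X.mean(axis=0)
    inv = np.linalg.pinv(np.cov(X, rowvar=False))
    diff = X - mu
    # squared Mahalanobis distance of every sample to the class mean
    d2 = np.einsum('ij,jk,ik->i', diff, inv, diff)
    order = np.argsort(d2)[::-1]  # farthest first
    return X[order[:keep]]
```

Unlike clustering-based selection, this needs no clustering parameters, but it implicitly assumes a roughly unimodal class distribution.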