The 2010 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2010.5596745

Efficient SVM training with reduced weighted samples

Abstract: This paper presents an efficient training approach for support vector machines that will improve their ability to learn from a large or imbalanced data set. Given an original training set, the proposed approach applies unsupervised learning to extract a smaller set of salient training exemplars, which are represented by weighted cluster centers and the target outputs. In subsequent supervised learning, the objective function is modified by introducing a weight for each new training sample and the corresponding…
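
The abstract describes a two-stage scheme: unsupervised clustering compresses the training set into weighted cluster centers, and the SVM objective is then modified with a per-sample weight. Below is a minimal sketch of that idea, assuming scikit-learn; the function name, the per-class clustering, and the use of `sample_weight` (which scales each sample's penalty term) are my illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): cluster each class, keep the
# weighted cluster centers, then train a weighted SVM on the reduced set.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def reduce_and_train(X, y, n_clusters_per_class=50, C=1.0, gamma="scale"):
    centers, labels, weights = [], [], []
    for cls in np.unique(y):
        Xc = X[y == cls]
        k = min(n_clusters_per_class, len(Xc))
        km = KMeans(n_clusters=k, n_init=10).fit(Xc)
        centers.append(km.cluster_centers_)
        labels.append(np.full(k, cls))
        weights.append(np.bincount(km.labels_, minlength=k))  # weight = cluster size
    Xr, yr = np.vstack(centers), np.concatenate(labels)
    w = np.concatenate(weights).astype(float)
    # sample_weight scales each sample's penalty in the SVM objective,
    # mirroring the per-sample weights the abstract introduces.
    return SVC(C=C, gamma=gamma).fit(Xr, yr, sample_weight=w)
```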

Cited by 11 publications (6 citation statements)
References 21 publications

“…In this way, if both sensitivity and specificity are high, the geometric mean G is also high; however, if either component accuracy (sensitivity or specificity) is low, the geometric mean G is pulled down by it. Thus, a classifier with a high geometric mean is highly desirable for class-specific mapping [59], and hence G can be used to fine-tune binary classification algorithms.…”
Section: Free-parameter Tuning (mentioning)
Confidence: 99%
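
The quoted criterion is easy to state concretely: G is the square root of the product of sensitivity and specificity. A minimal sketch follows (my illustration, not code from the cited work; the function name and label convention are assumptions).

```python
# Geometric mean of sensitivity and specificity, a common tuning criterion
# for imbalanced binary classification: G = sqrt(sensitivity * specificity).
import numpy as np

def g_mean(y_true, y_pred, positive=1):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    pos = y_true == positive
    sensitivity = np.mean(y_pred[pos] == positive)   # true-positive rate
    specificity = np.mean(y_pred[~pos] != positive)  # true-negative rate
    return np.sqrt(sensitivity * specificity)
```

Because G is a product, a classifier that scores well on the majority class but neglects the minority class still receives a low G, which is exactly the property the quote highlights.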
“…others class) and to the degree to which the positive class (i.e., the class of interest) is neglected (Nguyen et al. 2010).…”
Section: Comparison and Evaluation of Classifiers (mentioning)
Confidence: 99%
“…The methods in [48]–[51], which are called "SVM via clustering," detect cluster centers using an unsupervised clustering method such as k-means clustering and keep only the cluster centers [48]–[50] or the learning samples near the cluster centers [51]. In the case of [50], each remaining cluster center is weighted by its cluster size to compensate for the information loss caused by removing learning samples. The methods in [52]–[54], which are called "reduced SVM (RSVM)," assume that randomly selected learning samples can represent all learning samples.…”
Section: Learning Sample Selection of Prepruning (mentioning)
Confidence: 99%
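
As a rough contrast to the cluster-center reduction sketched earlier, the random-subset selection idea the quote attributes to RSVM might look like the following. This is only a caricature of the selection step (the function name and subset fraction are illustrative); the actual RSVM formulation in [52]–[54] differs in detail.

```python
# Caricature of random-subset selection (not the full RSVM formulation):
# draw a random fraction of the training samples and fit an SVM on them.
import numpy as np
from sklearn.svm import SVC

def random_subset_fit(X, y, subset_frac=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = max(2, int(subset_frac * len(X)))
    idx = rng.choice(len(X), size=n, replace=False)
    return SVC().fit(X[idx], y[idx])  # sketch: assumes both classes appear in the subset
```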