2007
DOI: 10.1142/s0218001407005703
A Weighted Support Vector Machine for Data Classification

Abstract: This paper presents a weighted support vector machine (WSVM) to improve the outlier sensitivity problem of the standard support vector machine (SVM) for two-class data classification. The basic idea is to assign different weights to different data points such that the WSVM training algorithm learns the decision surface according to the relative importance of data points in the training data set. The weights used in WSVM are generated by a robust fuzzy clustering algorithm, kernel-based possibilistic c-means (KPCM)…
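As a rough illustration of the weighting idea in the abstract (not the paper's KPCM-based scheme; the data, weights, and function names below are purely illustrative), a minimal weighted linear SVM can be trained by subgradient descent on a per-sample-weighted hinge loss:

```python
import numpy as np

def train_weighted_svm(X, y, sample_weights, C=1.0, lr=0.01, epochs=300):
    """Minimize 0.5*||w||^2 + C * sum_i s_i * max(0, 1 - y_i*(w.x_i + b))
    by subgradient descent, where s_i is the weight of sample i."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        active = y * (X @ w + b) < 1          # samples violating the margin
        w -= lr * (w - C * ((sample_weights * y * active) @ X))
        b -= lr * (-C * np.sum(sample_weights * y * active))
    return w, b

# Two clean clusters plus one mislabeled outlier that gets a tiny weight,
# so the learned surface is driven by the reliable points.
X = np.array([[2.0, 2.0], [2.5, 2.0], [2.0, 2.5],
              [-2.0, -2.0], [-2.5, -2.0], [-2.0, -2.5],
              [2.2, 2.2]])
y = np.array([1, 1, 1, -1, -1, -1, -1])       # last point is an outlier
s = np.array([1, 1, 1, 1, 1, 1, 0.01])        # down-weight the outlier
w, b = train_weighted_svm(X, y, s)
```

With uniform weights the mislabeled point at (2.2, 2.2) would drag the boundary toward the positive cluster; with its weight near zero, the two clean clusters dominate.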

Cited by 171 publications (83 citation statements)
References 17 publications
“…The training samples in an LSVM can be used in the calibration process with varying levels of importance [47]. In geographical problems, this importance can be defined by weights that are assigned to training samples based on their spatial similarity to the focal location of calibration.…”
Section: Local Support Vector Machines (mentioning)
confidence: 99%
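The distance-decay weighting described in this excerpt can be sketched as follows; the Gaussian kernel and bandwidth are assumptions for illustration, not taken from [47]:

```python
import numpy as np

def spatial_weights(train_coords, focal, bandwidth=1.0):
    """Gaussian distance-decay: samples near the focal calibration
    location get weights near 1, distant samples near 0."""
    d2 = np.sum((train_coords - focal) ** 2, axis=1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

coords = np.array([[0.0, 0.0], [0.5, 0.0], [3.0, 4.0]])
w_sp = spatial_weights(coords, focal=np.array([0.0, 0.0]))
# w_sp[0] == 1.0; the point 5 units away receives a near-zero weight
```

These weights could then be passed as the per-sample weights of a weighted SVM calibrated at the focal location.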
“…Considering the generalisation error, Ralaivola and d'Alché-Buc [46] presented a methodology for training SVMs known as incremental learning, which is based on the number of nearest neighbours. Yang et al. [47] proposed a weighted SVM model that was used to locally classify data based on the similarity between the training and testing data sets [48,49].…”
Section: Introduction (mentioning)
confidence: 99%
“…Sampling methods focus on balancing the distributions of data points between classes, while cost-sensitive learning methods take into account the costs associated with misclassifying data points [45][46][47][48][49][50][51]. These methods account for the varying cost of misclassifying each class.…”
Section: Cost-Sensitive Solutions (mentioning)
confidence: 99%
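The cost-sensitive idea in this excerpt can be made concrete with a cost matrix and a minimum-expected-cost decision rule; the matrix values below are illustrative assumptions:

```python
import numpy as np

# cost[i, j] = cost of predicting class j when the true class is i;
# here missing the minority class (row 1) is ten times worse.
cost = np.array([[0.0, 1.0],
                 [10.0, 0.0]])

def cost_sensitive_predict(probs, cost):
    """probs: (n, k) class-posterior estimates.
    Pick the class with minimum expected cost, i.e. argmin of probs @ cost."""
    return np.argmin(probs @ cost, axis=1)

probs = np.array([[0.8, 0.2]])   # class 0 is more probable...
pred = cost_sensitive_predict(probs, cost)
# ...but expected costs are [2.0, 0.8], so class 1 is predicted
```

This shows why a cost-sensitive classifier can overrule the most probable class: the rare class's misclassification cost outweighs its lower posterior.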
“…Examples include assigning a weight to each class or learning from one class (recognition-based) rather than two classes (discrimination-based) [17]. Weighted Support Vector Machines (SVMs) [18] assign distinct weights to data samples so that the training algorithm learns the decision surface according to the relative importance of data points in the training dataset. Fuzzy Support Vector Machines [19] are a version of weighted SVMs that applies a fuzzy membership to each input sample and reformulates the SVM so that input points make different contributions to the learning of the decision surface.…”
Section: Understanding BGP Data (mentioning)
confidence: 99%
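A minimal sketch of the membership idea behind fuzzy SVMs: one common formulation makes membership decay linearly with distance from the sample's own class mean, so outliers contribute little. The exact scheme in [19] may differ; everything here is an illustrative assumption:

```python
import numpy as np

def fuzzy_memberships(X, y, delta=1e-6):
    """Membership in (0, 1]: 1 - d / (d_max + delta), where d is a
    sample's distance from its own class mean and d_max is the largest
    such distance in that class. Far-out samples get small memberships."""
    m = np.empty(len(X))
    for c in np.unique(y):
        idx = y == c
        center = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - center, axis=1)
        m[idx] = 1.0 - d / (d.max() + delta)
    return m

X = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0],   # class 1; last is an outlier
              [-1.0, -1.0], [-1.2, -1.0], [-1.0, -1.3]])  # class -1
y = np.array([1, 1, 1, -1, -1, -1])
m = fuzzy_memberships(X, y)
# the outlier at (5, 5) receives a much smaller membership than the
# other class-1 samples near the origin
```

The resulting memberships would then play the role of the per-sample weights in the weighted SVM objective.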