2017
DOI: 10.1016/j.neucom.2016.10.041

PSO-based method for SVM classification on skewed data sets

Abstract: Over recent years, Support Vector Machines (SVMs) have become a successful approach to classification problems. However, the performance of SVMs is severely affected by skewed data sets: an SVM learns a biased model that degrades the performance of the classifier, and SVMs typically fail on data sets where the imbalance ratio is very large. Lately, several techniques have been used to tackle this disadvantage by generating artificial instances. Artificial data instances attempt to add inf…
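The skew problem described in the abstract is easy to reproduce. The sketch below is illustrative only; it is not the paper's PSO-based method, and the synthetic data set, the 95:5 ratio, and the class_weight remedy are assumptions. It shows a standard SVM learning a biased model on skewed data, with class weighting as a simple baseline correction:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import f1_score

# Synthetic 95:5 skewed data set (sizes and ratio are illustrative).
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = SVC().fit(X_tr, y_tr)                             # learns a biased model
weighted = SVC(class_weight="balanced").fit(X_tr, y_tr)   # reweights the minority class

# Minority-class F1 exposes the bias that overall accuracy hides.
print("plain   :", f1_score(y_te, plain.predict(X_te)))
print("weighted:", f1_score(y_te, weighted.predict(X_te)))

On a run like this, the plain SVM's minority-class F1 typically falls well below the weighted model's, which is the bias the abstract refers to.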

Cited by 71 publications (21 citation statements)
References 42 publications
“…Firstly, it can find an optimal hyperplane with the largest classification margin in the n-dimensional feature space. This prevents the classifier from falling into local minima [63], which is the case for ANN. Secondly, SVM can minimize the error on unseen samples [78] and thus achieve a higher classification accuracy [79].…”
Section: Comparison of Different Classifier Results
confidence: 99%
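The maximum-margin property this statement refers to can be checked numerically. In the sketch below (a toy example with assumed data, not from the citing paper), the margin width of a trained linear SVM is recovered as 2/||w||:

import numpy as np
from sklearn.svm import SVC

# Four toy points; a large C approximates a hard-margin SVM.
X = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 2.0], [2.0, 3.0]])
y = np.array([0, 0, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)
w = clf.coef_[0]
print("margin width   :", 2.0 / np.linalg.norm(w))  # margin = 2 / ||w||
print("support vectors:", clf.support_vectors_)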
“…If the samples are linearly separable, a linear discriminant function is established by constructing a classification surface that maximizes the distance between the samples of the two classes. If the samples are linearly inseparable, the SVM projects the training samples into a high-dimensional space and finds the optimal classifying hyperplane there [63].…”
Section: Support Vector Machine Classifier
confidence: 99%
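A minimal illustration of that projection argument (the concentric-circles data set and the gamma value are assumed, not taken from the citing paper): a linear SVM fails on linearly inseparable data, while an RBF-kernel SVM, which implicitly maps to a high-dimensional space, separates it:

from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Concentric circles: linearly inseparable in the input space.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf", gamma=2.0).fit(X, y)

print("linear accuracy:", linear.score(X, y))  # near chance level
print("rbf accuracy   :", rbf.score(X, y))     # near 1.0 after the implicit projection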
“…SVM classification can be obtained by Equation (11), using the related parameters of the i-th feature vectors. The inner product of the mapped samples is replaced by the kernel function, which effectively solves the linearly inseparable classification problem [25].…”
Section: Wireless Communications and Mobile Computing
confidence: 99%
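The kernel substitution this statement mentions can be made explicit with scikit-learn's callable-kernel interface. In the sketch below, the rbf_gram helper is a hypothetical name and the moons data set an assumed stand-in, not material from the citing paper:

import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

def rbf_gram(A, B, gamma=1.0):
    # K(x, z) = exp(-gamma * ||x - z||^2), computed without ever forming phi(x):
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

X, y = make_moons(n_samples=300, noise=0.1, random_state=0)
clf = SVC(kernel=rbf_gram).fit(X, y)  # SVC accepts a callable that returns the Gram matrix
print("accuracy:", clf.score(X, y))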
“…The parameter g represents the kernel function parameter γ; the k in its usual setting g = 1/k denotes the number of attributes in the input data. Thus, the hit rate of the roll gap value prediction model is governed by these two parameters [20][21][22][23][24][25][26][27][28][29].…”
Section: PSO-SVM Model
confidence: 99%
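The PSO-SVM coupling this statement alludes to amounts to searching over (C, γ) with a swarm whose fitness is cross-validated accuracy. The sketch below is a generic PSO, not the cited model; the swarm size, inertia and acceleration coefficients, search bounds, and data set are all assumed:

import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=8, random_state=0)

def fitness(p):
    C, gamma = 10.0 ** p  # particles live in log10 space
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

rng = np.random.default_rng(0)
pos = rng.uniform(-3, 3, size=(12, 2))  # 12 particles over log10(C), log10(gamma)
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()]

for _ in range(15):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    # Standard velocity update: inertia + cognitive pull + social pull.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()]

print("best (C, gamma):", 10.0 ** gbest, "cv accuracy:", pbest_fit.max())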