2018
DOI: 10.1016/j.ins.2018.06.007

Insensitive stochastic gradient twin support vector machines for large scale problems

Abstract: The stochastic gradient descent algorithm has been successfully applied to support vector machines (yielding PEGASOS) for many classification problems. In this paper, the stochastic gradient descent algorithm is applied to twin support vector machines for classification. Compared with PEGASOS, the proposed stochastic gradient twin support vector machine (SGTSVM) is insensitive to the stochastic sampling used by stochastic gradient descent. In theory, we prove the convergence of SGTSVM instead of the almost sure conv…
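
To make the abstract's core idea concrete, here is a minimal, hypothetical sketch of a stochastic gradient update for one twin hyperplane. It is not the paper's exact SGTSVM algorithm: the proximity and hinge terms, the regularization constant lam, and the one-sample-per-class sampling scheme are illustrative assumptions.

```python
import numpy as np

def sgtsvm_plane(X_own, X_other, c=1.0, lam=0.01, eta=0.01, n_iter=2000, seed=0):
    """Fit one twin hyperplane w.x + b = 0 by stochastic gradient descent:
    pull it toward its own class (squared proximity term) and push the
    other class beyond a unit margin (hinge term). Illustrative only."""
    rng = np.random.default_rng(seed)
    w, b = np.zeros(X_own.shape[1]), 0.0
    for _ in range(n_iter):
        xo = X_own[rng.integers(len(X_own))]      # random own-class sample
        xn = X_other[rng.integers(len(X_other))]  # random other-class sample
        # gradient of (1/2)(w.xo + b)^2 + (lam/2)(||w||^2 + b^2)
        r = w @ xo + b
        gw, gb = r * xo + lam * w, r + lam * b
        # subgradient of c * max(0, 1 + (w.xn + b))
        if 1.0 + (w @ xn + b) > 0.0:
            gw += c * xn
            gb += c
        w -= eta * gw
        b -= eta * gb
    return w, b
```

Running this twice with the classes swapped gives both planes; a test point is then assigned to the class whose plane it lies nearest, i.e. the smallest |w_j·x + b_j| / ||w_j||.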

Cited by 44 publications (6 citation statements) · References 38 publications

“…Figure 2 visualizes the error rate between the predicted and actual data for linear classification, and Figure 3 visualizes the same for non-linear classification of the benchmark dataset. Table 2 gives a comparative analysis of the other models, namely SVM, TWSVM [17], TBSVM [21], EFSVM [4], U-NHSVM [10], SGTSVM [22], and RTBSVM [15], against p-dist TWSVM on the metrics Accuracy, F-Measure, and G-mean, together with elapsed time, for the linear case on seven benchmark data sets. Our proposed method performs better with the linear kernel than with the Gaussian kernel.…”
Section: Performance Evaluation (mentioning)
confidence: 99%
“…For a large scale dataset X, the kernel function K(·, X) transforms the samples into a space of much higher dimension than the linear formation, resulting in a large amount of computation. However, the reduced kernel trick [43], [44], which replaces K(·, X) with K(·, X̃), can reduce the computation efficiently, where X̃ is selected randomly from X and its size is much smaller than that of X.…”
Section: Nonlinear Formation (mentioning)
confidence: 99%
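
The reduced kernel trick described in this statement is straightforward to sketch. Below is a minimal illustration under a standard RBF kernel; the function names, gamma, and the subset size m are assumptions for illustration, not the cited papers' code.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # K[i, j] = exp(-gamma * ||A_i - B_j||^2)
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def reduced_kernel(X, m=100, gamma=0.5, seed=0):
    """Replace the full n x n Gram matrix K(X, X) with the n x m map
    K(X, X_tilde), where X_tilde is a random subset of X with m << n."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(m, len(X)), replace=False)
    X_tilde = X[idx]
    return rbf_kernel(X, X_tilde, gamma), X_tilde
```

Replacing K(X, X) with K(X, X̃) drops the kernel storage and computation from O(n²) to O(nm), which is what keeps the nonlinear formation tractable at large scale.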
“…In the following, we extend RampTWSVC to nonlinear manifold clustering; the solutions to the problems in linear and nonlinear RampTWSVC are elaborated in the next subsection. The plane-based clustering method can easily be extended to nonlinear manifold clustering by the kernel trick [20], [21]. By introducing a pre-defined kernel function K(·, ·), plane-based nonlinear clustering seeks k cluster center manifolds in the kernel-generated space as…”
Section: A. Formation (mentioning)
confidence: 99%
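
The quote is truncated just before the manifold definition. In kernel plane-based clustering of this kind, the cluster center manifolds typically take the form K(x, X)u_j + b_j = 0; under that assumption, a hypothetical sketch of the assignment step shared by such methods follows (the solvers for u_j and b_j, which are the substance of RampTWSVC, are omitted):

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # K[i, j] = exp(-gamma * ||A_i - B_j||^2)
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def assign_to_manifolds(X, U, b, basis, gamma=0.5):
    """Assign each sample to the cluster j whose kernel-space manifold
    f_j(x) = K(x, basis) @ u_j + b_j = 0 it lies closest to, i.e. the
    smallest |f_j(x)|. U stacks one column u_j per cluster."""
    scores = np.abs(rbf_kernel(X, basis, gamma) @ U + b)  # n x k distances
    return scores.argmin(axis=1)                          # nearest manifold
```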