2023
DOI: 10.1016/j.patcog.2022.109192

Laplacian Lp norm least squares twin support vector machine

Cited by 18 publications (8 citation statements)
References 27 publications
“…Unlike SVM, TSVM aims to generate two non-parallel planes such that each plane is closer to one of the two classes and as far away from the other as possible (Rezvani and Wang, 2021). Furthermore, the single large quadratic programming problem in SVM is transformed into two smaller quadratic programming problems, so the computational time of TSVM is reduced to a quarter of that of traditional SVM (Rezvani and Wang, 2022; Xie et al., 2023). The speed advantage of TSVM partially compensates for the shortcomings of existing SVMs when solving large-scale classification problems.…”
Section: Introduction
confidence: 99%
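The mechanism described in this excerpt, replacing SVM's one large quadratic program with two smaller per-class problems, can be illustrated with the least squares twin SVM, where each problem further reduces to a linear system. Below is a minimal NumPy sketch of the standard linear LSTSVM, not the paper's Laplacian Lp variant; the function names and the small ridge term `reg` are illustrative assumptions:

```python
import numpy as np

def lstsvm_fit(A, B, c1=1.0, c2=1.0, reg=1e-6):
    # Least squares twin SVM (linear case): each of the two planes is
    # obtained by solving one linear system instead of a quadratic program.
    # A: samples of class +1 (rows), B: samples of class -1 (rows).
    E = np.hstack([A, np.ones((A.shape[0], 1))])   # augmented [A  e]
    F = np.hstack([B, np.ones((B.shape[0], 1))])   # augmented [B  e]
    I = np.eye(E.shape[1])                         # small ridge for stability
    # Plane 1: close to class +1, pushed away from class -1.
    z1 = -np.linalg.solve(F.T @ F + (1.0 / c1) * E.T @ E + reg * I,
                          F.T @ np.ones((F.shape[0], 1)))
    # Plane 2: close to class -1, pushed away from class +1.
    z2 = np.linalg.solve(E.T @ E + (1.0 / c2) * F.T @ F + reg * I,
                         E.T @ np.ones((E.shape[0], 1)))
    return z1.ravel(), z2.ravel()                  # each is [w; b]

def lstsvm_predict(X, z1, z2):
    # Assign each point to the class whose plane is nearer.
    w1, b1 = z1[:-1], z1[-1]
    w2, b2 = z2[:-1], z2[-1]
    d1 = np.abs(X @ w1 + b1) / np.linalg.norm(w1)
    d2 = np.abs(X @ w2 + b2) / np.linalg.norm(w2)
    return np.where(d1 <= d2, 1, -1)
```

The Laplacian variants cited here add a manifold regularization term built from unlabeled data (and, in Lap-LpLSTSVM, an Lp-norm penalty) to each of these two objectives, while keeping the same solve-two-small-problems structure.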
“…Chen et al. [23] introduced the Laplacian least squares TSVM (Lap-LSTSVM), which solves two systems of linear equations rather than quadratic programs, reducing the computational cost. Recently, instead of using the L1 or L2 norm, Xie et al. [24] improved Lap-LSTSVM to the Laplacian Lp norm least squares TSVM (Lap-LpLSTSVM). To address the noise sensitivity and resampling instability of Lap-TSVM, many techniques have been proposed, such as IFLap-TSVM [25] and SIFTSVM [26].…”
Section: Introduction
confidence: 99%
“…Wang proposes an upper-bound L1 parametrization [11], setting an upper limit ε on the loss so that, regardless of how badly a sample is misclassified, the loss never exceeds ε, which ensures better robustness. Xie introduces the Lp-norm regularization term [12] to effectively exploit the geometric information embedded in the data, avoiding overfitting while improving the generalization ability of the algorithm. Liang modeled each uncertain sample as a random vector with a Gaussian distribution in the proposed model [13].…”
Section: Introduction
confidence: 99%
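The Lp-norm regularization term mentioned in this excerpt can be written generically. The following is an illustrative least-squares-style objective for the first plane with an Lp penalty added, a sketch of the general idea rather than the exact formulation of Lap-LpLSTSVM:

```latex
\min_{w_1,\, b_1}\;
\frac{1}{2}\,\lVert A w_1 + e_1 b_1 \rVert_2^2
+ \frac{c_1}{2}\,\lVert B w_1 + e_2 b_1 + e_2 \rVert_2^2
+ \lambda\,\lVert w_1 \rVert_p^p,
\qquad 0 < p \le 2,
```

where \(\lVert w \rVert_p^p = \sum_j \lvert w_j \rvert^p\). Choosing \(p < 2\) makes the penalty promote sparser weight vectors than the usual L2 (ridge) term, which is one way such a term can exploit the structure of the data and curb overfitting.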