2018
DOI: 10.1109/tnnls.2017.2705429
Improving Sparsity and Scalability in Regularized Nonconvex Truncated-Loss Learning Problems

Abstract: The truncated regular -loss support vector machine can eliminate the excessive number of support vectors (SVs); thus, it has significant advantages in robustness and scalability. However, in this paper, we discover that the associated state-of-the-art solvers, such as the difference of convex algorithm (DCA) and the concave-convex procedure (CCCP), not only have a limited sparsity-promoting property for general truncated losses (especially the -loss) but also have poor scalability for large-scale problems. To circumvent these drawbacks, …

Cited by 6 publications (9 citation statements)
References 20 publications
“…There have been many researchers using non-convex loss functions to weaken the influence of outliers. For example, Shen et al. [13], Collobert et al. [4], and Wu and Liu [25] study the robust SVM with the truncated hinge loss; Tao et al. [20] study the robust SVM with the truncated hinge loss and the truncated squared hinge loss. Based on the DCA (difference of convex algorithm) procedure [21,26], all of those studies give algorithms that iteratively solve the L1/L2-SVM to obtain solutions of their proposed non-convex models.…”
Section: arXiv:2006.09111v1 [cs] (mentioning)
confidence: 99%
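As context for the DCA/CCCP scheme these works build on, here is a minimal sketch (not the cited authors' code) of one common instantiation for a truncated (ramp) hinge loss: each outer iteration treats points whose margin falls below the truncation threshold under the current model as outliers, drops them, and refits an ordinary hinge-loss SVM until the outlier set stabilizes. The threshold `s`, the helper name `truncated_hinge_svm`, and the use of scikit-learn's `LinearSVC` as the convex inner solver are illustrative assumptions.

```python
# Sketch of a DCA/CCCP-style outer loop for a truncated-hinge-loss linear SVM.
# Assumption: outliers are the points whose margin lies in the flat (truncated)
# region of the ramp loss; they are removed and a standard hinge-loss SVM is refit.
import numpy as np
from sklearn.svm import LinearSVC

def truncated_hinge_svm(X, y, C=1.0, s=-1.0, max_outer=20):
    """y in {-1, +1}; s < 1 is the truncation point of the ramp loss."""
    active = np.ones(len(y), dtype=bool)          # start with all points active
    clf = LinearSVC(C=C, loss="hinge", max_iter=10000)
    for _ in range(max_outer):
        clf.fit(X[active], y[active])             # convex inner problem (L1-SVM)
        margins = y * clf.decision_function(X)    # y_i * w^T x_i for every point
        new_active = margins > s                  # margin <= s -> truncated region
        if np.array_equal(new_active, active):    # outlier set has stabilized
            break
        active = new_active
    return clf, active
```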
“…However, the inner loop of these algorithms is computationally expensive. For example, Collobert et al. [4], Wu and Liu [25], Feng et al. [5], and Tao et al. [20] solve a constrained quadratic program (QP) defined by the L1/L2-SVM or a re-weighted L2-SVM, and all state-of-the-art methods for such quadratic programs require many iterations. In Tao et al. [20], some efficient techniques based on coordinate descent are given to reduce the cost of the inner loop, but it still needs to solve an L1/L2-SVM, possibly of a smaller size.…”
Section: arXiv:2006.09111v1 [cs] (mentioning)
confidence: 99%
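For concreteness, the kind of coordinate-descent inner solver alluded to here can be sketched as a LIBLINEAR-style dual coordinate descent for the L2-loss (squared-hinge) linear SVM, which sweeps the dual variables one at a time while maintaining the primal vector w. This is a minimal sketch under stated assumptions, not the specific technique of Tao et al. [20]; the function name and epoch count are illustrative, and the bias term is omitted for simplicity.

```python
# Sketch of dual coordinate descent for the L2-loss (squared-hinge) linear SVM.
# Dual: min 0.5*a^T(Q+D)a - e^T a, a_i >= 0, with D_ii = 1/(2C) and w = sum_i a_i y_i x_i.
import numpy as np

def dual_cd_l2svm(X, y, C=1.0, n_epochs=50):
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    diag = 0.5 / C                                      # D_ii for the L2-loss dual
    Qbar_ii = (X ** 2).sum(axis=1) + diag               # Q_ii + D_ii (y_i^2 = 1)
    for _ in range(n_epochs):
        for i in np.random.permutation(n):
            G = y[i] * (X[i] @ w) - 1.0 + diag * alpha[i]    # partial gradient
            new_alpha = max(0.0, alpha[i] - G / Qbar_ii[i])  # projected CD update
            w += (new_alpha - alpha[i]) * y[i] * X[i]        # keep w in sync
            alpha[i] = new_alpha
    return w
```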