2012
DOI: 10.1007/s11432-012-4679-3

Sparse kernel logistic regression based on L1/2 regularization

Abstract: Sparsity-driven classification techniques have attracted much attention in recent years, owing to their capability of providing more compressive representations and clearer interpretation. The two most popular classification approaches are support vector machines (SVMs) and kernel logistic regression (KLR), each having its own advantages. The sparsification of SVM has been well studied, and many sparse versions of the 2-norm SVM, such as the 1-norm SVM (1-SVM), have been developed. By contrast, the sparsification of KLR has been…
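To ground the setting, here is a minimal sketch of sparse kernel logistic regression. Since L1/2 penalties are not available in standard libraries, an L1 penalty on an explicit approximate kernel feature map serves as a stand-in; the dataset, kernel width, and regularization strength are illustrative assumptions rather than the paper's experimental setup.

```python
# Sparse kernel logistic regression sketch: an L1 penalty stands in for
# the paper's L1/2 penalty, which scikit-learn does not provide.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)

model = make_pipeline(
    # Explicit RBF kernel feature map; gamma=1.0 is an illustrative choice.
    Nystroem(kernel="rbf", gamma=1.0, n_components=100, random_state=0),
    # The L1 penalty drives many coefficients to zero, giving a sparse
    # expansion over the kernel features.
    LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
)
model.fit(X, y)

coef = model.named_steps["logisticregression"].coef_.ravel()
print("nonzero kernel coefficients:", np.count_nonzero(coef), "of", coef.size)
```

An L1/2 penalty would typically produce an even sparser coefficient vector than the L1 stand-in above, which is the motivation the abstract points to.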

Cited by 11 publications (6 citation statements: 0 supporting, 6 mentioning, 0 contrasting); references 26 publications. Citing publications span 2014–2020.
“…is monotonically decreasing with respect to , there exists a such that and . We will proceed to show that whenever , it holds Actually, we have (27) By Proposition 2, for any , (28) and for any ,…”
Section: Since (mentioning)
confidence: 99%
“…Furthermore, through developing a thresholding representation theory, an iterative half thresholding algorithm (called the half algorithm for brevity) was proposed in [20] for the fast solution of L1/2 regularization. Inspired by the well-developed theoretical properties and the fast algorithm, L1/2 regularization has been successfully applied in many areas, including hyperspectral unmixing [24], synthetic aperture radar (SAR) imaging [25], [26], machine learning [27], [28], gene selection [29], and practical engineering [30].…”
Section: Introduction (mentioning)
confidence: 99%
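To make the half algorithm concrete, the following is a minimal numerical sketch of iterative half thresholding for an L1/2-regularized least-squares problem. The closed-form thresholding operator is written from my reading of Xu et al.'s thresholding representation and should be checked against [20]; the synthetic data, step size, and penalty level are illustrative assumptions.

```python
# Iterative half thresholding sketch for
#   min_x ||A x - y||^2 + lam * sum_i |x_i|^(1/2).
import numpy as np

def half_threshold(z, lam):
    """L1/2 (half) thresholding operator, applied elementwise.

    Closed form per Xu et al.'s thresholding representation theory
    (assumption: transcribed correctly from that derivation).
    """
    t = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)  # jump threshold
    out = np.zeros_like(z)
    big = np.abs(z) > t
    phi = np.arccos((lam / 8.0) * (np.abs(z[big]) / 3.0) ** (-1.5))
    out[big] = (2.0 / 3.0) * z[big] * (
        1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0)
    )
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[[3, 77, 150]] = [1.5, -2.0, 1.0]       # sparse ground truth
y = A @ x_true

mu = 0.99 / np.linalg.norm(A, 2) ** 2         # step size from spectral norm
lam = 0.1                                     # illustrative penalty level
x = np.zeros(200)
for _ in range(500):
    # Gradient step on the quadratic term, then half thresholding.
    x = half_threshold(x + mu * A.T @ (y - A @ x), lam * mu)

print("top-3 entries by magnitude:", sorted(np.argsort(-np.abs(x))[:3]))
```

Unlike soft thresholding for the L1 penalty, the half thresholding operator has a jump discontinuity at its threshold, which is what produces the stronger sparsity of L1/2 solutions.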
“…By combining the multiclass elastic net penalty (18) with the multinomial likelihood loss function (17), we propose the following multinomial regression model with the elastic net penalty:…”
Section: Multinomial Regression With the Multiclass Elastic Net (mentioning)
confidence: 99%
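As a concrete analogue of such a model, multinomial logistic regression with an elastic net penalty can be fit with off-the-shelf tools. The sketch below uses scikit-learn's saga solver; the dataset and the L1/L2 mixing ratio are illustrative assumptions, and equations (17)-(18) of the cited paper are not reproduced here.

```python
# Multinomial logistic regression with an elastic net penalty, combining
# an L1 term (sparsity) with an L2 term (stability/grouping).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

clf = LogisticRegression(
    penalty="elasticnet",
    solver="saga",      # the scikit-learn solver that supports elastic net
    l1_ratio=0.5,       # illustrative mix between L1 and L2
    C=1.0,
    max_iter=5000,
)
clf.fit(X, y)
print("per-class coefficient matrix:\n", clf.coef_)
```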
“…Note that the logistic loss function not only has a clear statistical interpretation but is also twice differentiable. Hence, regularized logistic regression models have been successfully applied to binary classification problems [15][16][17][18][19]. Multinomial regression is obtained when logistic regression is applied to the multiclass classification problem.…”
Section: Introduction (mentioning)
confidence: 99%
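Since the quote hinges on the logistic loss being twice differentiable, here is a small sketch of Newton (IRLS) iterations for L2-regularized binary logistic regression, exploiting the closed-form gradient and Hessian; the synthetic data and ridge strength are illustrative assumptions.

```python
# Newton/IRLS for L2-regularized binary logistic regression: the twice-
# differentiable logistic loss yields an explicit gradient and Hessian.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

lam = 1e-2                                   # illustrative ridge strength
w = np.zeros(5)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ w))         # predicted probabilities
    grad = X.T @ (p - y) + lam * w           # gradient of loss + ridge
    W = p * (1.0 - p)                        # diagonal Hessian weights
    H = X.T @ (X * W[:, None]) + lam * np.eye(5)
    w -= np.linalg.solve(H, grad)            # Newton step

print("estimated weights:", np.round(w, 2))
```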
“…Kernel methods [1,2] have been successfully used in pattern recognition and machine learning. Since the performance of kernel methods greatly depends on the selection of the kernel function, the kernel selection problem becomes an important topic in kernel methods [3][4][5].…”
Section: Introduction (mentioning)
confidence: 99%
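To illustrate the kernel selection problem this quote refers to, a common baseline is to search over kernel families and their hyperparameters by cross-validation. The grid below, over SVC kernels, is an illustrative assumption rather than a method from the cited works.

```python
# Kernel selection by cross-validated grid search over kernel families
# and their hyperparameters.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

param_grid = [
    {"kernel": ["rbf"], "gamma": [0.01, 0.1, 1.0], "C": [0.1, 1, 10]},
    {"kernel": ["poly"], "degree": [2, 3], "C": [0.1, 1, 10]},
    {"kernel": ["linear"], "C": [0.1, 1, 10]},
]
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)
print("selected kernel and parameters:", search.best_params_)
```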