Proceedings of the 2010 SIAM International Conference on Data Mining
DOI: 10.1137/1.9781611972801.75
Fast Implementation of ℓ1 Regularized Learning Algorithms Using Gradient Descent Methods

Abstract: With the advent of high-throughput technologies, ℓ1 regularized learning algorithms have attracted much attention recently. Dozens of algorithms have been proposed for fast implementation, using various advanced optimization techniques. In this paper, we demonstrate that ℓ1 regularized learning problems can be easily solved by using gradient-descent techniques. The basic idea is to transform a convex optimization problem with a non-differentiable objective function into an unconstrained non-convex problem, u…
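The abstract truncates before spelling out the transformation. A common reparameterization that fits its description is the elementwise splitting w = u∘u − v∘v, under which the ℓ1 penalty becomes the smooth term λ·Σ(u² + v²) and plain gradient descent applies. The sketch below is a hedged illustration of that idea on ℓ1-regularized least squares, not necessarily the paper's algorithm; the function name, step size, and iteration count are invented.

```python
# Minimal sketch (assumed details): solve min_w (1/2n)*||Xw - y||^2 + lam*||w||_1
# by writing w = u*u - v*v elementwise, replacing the non-smooth penalty with
# lam*sum(u^2 + v^2). The surrogate is smooth but non-convex, so ordinary
# gradient descent can be used directly.
import numpy as np

def l1_least_squares_gd(X, y, lam=0.05, lr=1e-2, n_iters=5000):
    n, d = X.shape
    # Nonzero init: u = v = 0 is a stationary point of the non-convex surrogate.
    u = np.full(d, 0.1)
    v = np.full(d, 0.1)
    for _ in range(n_iters):
        w = u * u - v * v
        g = X.T @ (X @ w - y) / n              # gradient of the smooth loss in w
        u -= lr * (2 * u * g + 2 * lam * u)    # chain rule: dw/du_j = 2*u_j
        v -= lr * (-2 * v * g + 2 * lam * v)   # chain rule: dw/dv_j = -2*v_j
    return u * u - v * v

# Tiny synthetic check: only the first three coefficients carry signal.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(100)
print(np.round(l1_least_squares_gd(X, y), 2))  # spurious weights shrink toward 0
```

With this parameterization the spurious coefficients are driven toward zero but reach it only asymptotically, which is the usual trade-off of smoothing the ℓ1 penalty.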

Cited by 10 publications (10 citation statements). References 24 publications.
“…In this situation, special care must be taken to avoid overfitting problems. A commonly used practice is to select a small feature subset so that the performance of a learning algorithm is optimized ( 21–23 ). For the purpose of this article, we used regularized logistic regression to perform feature selection and classification simultaneously ( 23 ).…”
Section: Methods
confidence: 99%
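For readers unfamiliar with the pattern this statement describes, the sketch below shows simultaneous feature selection and classification with ℓ1-regularized logistic regression using scikit-learn. It is an illustration of the general technique, not the citing authors' code; the dataset and regularization strength C are arbitrary.

```python
# l1-penalized logistic regression: the penalty zeroes out uninformative
# coefficients, so fitting the classifier also performs feature selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=500,
                           n_informative=10, random_state=0)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X, y)
selected = np.flatnonzero(clf.coef_[0])   # nonzero weights = selected features
print(f"{selected.size} of {X.shape[1]} features kept:", selected)
```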
“…The two steps were iterated until convergence. We used our recently developed gradient-descent-based algorithm to solve the above optimization problem efficiently [26]. By using the fixed-point theory [27], it can be proved that the algorithm converges to a unique solution regardless of the initial weights if the kernel width is properly selected.…”
Section: Supervised Learning Approach to Identifying Cancer Progression
confidence: 99%
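The statement does not specify the two update steps, so no attempt is made to reproduce them here. Purely as a schematic of the fixed-point argument it appeals to, the loop below iterates a hypothetical combined update `update_step` until the weights stop moving; if that update is a contraction, the iteration converges to the same unique point from any starting weights.

```python
# Schematic only: `update_step` is a hypothetical stand-in for one full pass of
# the cited algorithm's two steps. If it is a contraction mapping, Banach's
# fixed-point theorem guarantees convergence to a unique solution.
import numpy as np

def iterate_to_fixed_point(update_step, w0, tol=1e-8, max_iters=1000):
    w = np.asarray(w0, dtype=float)
    for _ in range(max_iters):
        w_next = update_step(w)
        if np.linalg.norm(w_next - w) < tol:   # weights stopped moving
            return w_next
        w = w_next
    return w

# Toy contraction T(w) = 0.5*w + 1 has the unique fixed point w = 2,
# reached from any initialization.
print(iterate_to_fixed_point(lambda w: 0.5 * w + 1.0, np.array([100.0])))
```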
“…We used L1 logistic regression to perform the feature selection procedure due to its ability to handle high-dimensional data [ 40 ]. The model was described as follows:…”
Section: Methods
confidence: 99%
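The quoted statement truncates before its model. For orientation, the standard ℓ1-penalized logistic regression objective (not necessarily the cited paper's exact notation) is:

```latex
\min_{\mathbf{w},\,b}\;
\frac{1}{n}\sum_{i=1}^{n}
\log\!\bigl(1 + \exp\bigl(-y_i(\mathbf{w}^{\top}\mathbf{x}_i + b)\bigr)\bigr)
\;+\;\lambda \lVert \mathbf{w} \rVert_1,
\qquad y_i \in \{-1, +1\},
```

where larger λ drives more coefficients of w exactly to zero, which is what makes the model usable for feature selection.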