2015
DOI: 10.1007/s10107-015-0941-y

An inexact successive quadratic approximation method for L-1 regularized optimization

Abstract: We study a Newton-like method for the minimization of an objective function φ that is the sum of a smooth convex function and an ℓ1 regularization term. This method, which is sometimes referred to in the literature as a proximal Newton method, computes a step by minimizing a piecewise quadratic model q_k of the objective function φ. In order to make this approach efficient in practice, it is imperative to perform this inner minimization inexactly. In this paper, we give inexactness conditions that guarantee g…
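To make the setting concrete, the sketch below implements a bare-bones proximal Newton (successive quadratic approximation) loop for φ(x) = f(x) + λ‖x‖₁, where the piecewise quadratic model q_k is minimized inexactly with a fixed budget of ISTA (proximal gradient) iterations. This is an illustration, not the authors' algorithm: the function names, the fixed inner budget, and the omitted line search are all simplifying assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def inexact_prox_newton(grad, hess, x0, lam, outer=50, inner=20, tol=1e-8):
    # At each outer iterate x, form the piecewise quadratic model
    #   q_k(d) = grad(x)^T d + 0.5 d^T H d + lam * ||x + d||_1
    # and minimize it *inexactly* with a fixed number of ISTA steps.
    x = x0.copy()
    for _ in range(outer):
        g, H = grad(x), hess(x)
        if np.linalg.norm(x - soft_threshold(x - g, lam)) < tol:
            break                            # outer optimality: ISTA residual of phi
        L = np.linalg.norm(H, 2)             # Lipschitz bound for the model's smooth part
        d = np.zeros_like(x)
        for _ in range(inner):               # inexact inner minimization of q_k
            d = soft_threshold(x + d - (g + H @ d) / L, lam / L) - x
        x = x + d                            # a globalizing line search on phi is omitted
    return x
```

A fixed inner budget is the crudest form of inexactness; the conditions studied in the paper replace it with a principled stopping test for the inner solver.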

Cited by 65 publications (97 citation statements)
Citation types: 5 supporting, 92 mentioning, 0 contrasting
Citing publications span 2016–2023
References 20 publications

“…In the nonlinear optimization setting, the complexity of various unconstrained methods has been derived under exact derivative information [7,8,17], and also under inexact information, where the errors are bounded in a deterministic fashion [3,6,11,14,20]. In all cases of the deterministic inexact setting, traditional optimization algorithms such as line search, trust region, or adaptive regularization methods are applied with little modification and work in practice as well as in theory, provided the error is bounded in some decaying manner at each iteration.…”
Section: Introduction (mentioning)
confidence: 99%
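As a toy illustration of this deterministic inexact setting (not taken from the cited works; the schedule and constants are assumptions), one can run gradient descent with a gradient error whose norm is bounded by a quantity that decays at each iteration:

```python
import numpy as np

def inexact_gradient_descent(grad, x0, step=0.1, iters=100, err0=1.0, decay=0.9):
    # Gradient descent with an inexact gradient oracle: the error is
    # bounded deterministically by err0 * decay**k, i.e. the bound
    # decays at each iteration k (illustrative schedule).
    x = x0.copy()
    for k in range(iters):
        e = np.random.randn(*x.shape)        # some perturbation ...
        bound = err0 * decay**k
        norm_e = np.linalg.norm(e)
        if norm_e > bound:
            e *= bound / norm_e              # ... clipped to respect the decaying bound
        x = x - step * (grad(x) + e)
    return x
```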
“…Byrd, Nocedal and Oztoprak [5] proposed an inexact Newton-like method, also called a proximal Newton method. At every iteration, this method computes an inexact solution of the piecewise quadratic model satisfying certain inexactness conditions. Consider the following optimization problem: …”
Section: A. Inexactness Condition (mentioning)
confidence: 99%
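A hedged sketch of such a residual-based inexactness test follows: the inner ISTA iteration on the model q_k stops once the model's own optimality residual falls below a fraction η of the outer residual of φ at x_k. The names and the choice η = 0.1 are illustrative, and the paper's full conditions also involve a sufficient-decrease requirement on q_k that is omitted here.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_grad_residual(y, smooth_grad, lam):
    # Norm of the unit-step proximal-gradient (ISTA) residual; it is zero
    # exactly at a minimizer of <smooth part> + lam * ||.||_1.
    return np.linalg.norm(y - soft_threshold(y - smooth_grad, lam))

def inner_solve(x, g, H, lam, eta=0.1, max_inner=100):
    # Run inner ISTA steps on q_k(d) = g^T d + 0.5 d^T H d + lam*||x+d||_1
    # until its residual is at most eta times the outer residual of phi.
    L = np.linalg.norm(H, 2)                    # inner step size 1/L
    target = eta * prox_grad_residual(x, g, lam)
    d = np.zeros_like(x)
    for _ in range(max_inner):
        d = soft_threshold(x + d - (g + H @ d) / L, lam / L) - x
        if prox_grad_residual(x + d, g + H @ d, lam) <= target:
            break
    return d
```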
“…Lee, Sun and Saunders [28] presented an inexact proximal Newton method to solve problem (1) and established several local convergence results. Byrd, Nocedal and Oztoprak [5] proposed an inexact successive quadratic approximation (SQA) method, which also uses a quadratic model to approximate the objective function. Instead of computing the exact solution of the quadratic model, SQA computes an approximate solution satisfying an inexactness condition at every iteration.…”
Section: Introduction (mentioning)
confidence: 99%
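For reference, the piecewise quadratic model that SQA-type methods minimize at the iterate x_k can be written as follows (notation assumed, consistent with the abstract; B_k denotes the Hessian ∇²f(x_k) or an approximation to it):

$$
q_k(d) = f(x_k) + \nabla f(x_k)^{\top} d + \tfrac{1}{2}\, d^{\top} B_k\, d + \lambda \lVert x_k + d \rVert_1 .
$$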
“…These include first-order methods, such as ISTA, SpaRSA and FISTA [3,10,37], and proximal Newton methods that compute a step by minimizing a piecewise quadratic model of (1) using (for example) a coordinate descent iteration [5,16,19,23,26,29,32,38]. The proposed algorithm also differs from methods that solve (1) by reformulating it as a bound constrained problem [12,18,27,28,33,35], and from recent methods that are specifically designed for the case when f is a convex quadratic [9,30,34].…”
Section: Introduction (mentioning)
confidence: 99%
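The coordinate descent inner iteration mentioned above exploits the fact that minimizing the piecewise quadratic model over a single coordinate has a closed-form soft-thresholding solution. A minimal sketch, assuming H has a positive diagonal (names and the incremental H·d bookkeeping are illustrative):

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_sweep(x, g, H, lam, d, sweeps=1):
    # Coordinate-descent sweeps over the piecewise quadratic model
    #   q(d) = g^T d + 0.5 d^T H d + lam * ||x + d||_1 .
    # Each one-dimensional subproblem is solved in closed form.
    Hd = H @ d                                   # maintained incrementally below
    for _ in range(sweeps):
        for i in range(x.size):
            a = H[i, i]                          # curvature along coordinate i (assumed > 0)
            b = g[i] + Hd[i] - a * d[i]          # model derivative without the i-th term
            z = soft_threshold(x[i] - b / a, lam / a)   # closed-form 1-D minimizer
            delta = (z - x[i]) - d[i]
            Hd += H[:, i] * delta                # keep H @ d up to date in O(n)
            d[i] = z - x[i]
    return d
```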
“…Implementations of the proximal Newton method employ adaptive techniques (heuristics or rules based on randomized analysis) [16,26,38], or rules based on an optimality measure [5,19]. Our implementation uses the classic termination criterion based on the relative error in the residual of the linear system [22].…”
Section: Introduction (mentioning)
confidence: 99%
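The "relative error in the residual of the linear system" criterion referenced here is the classic inexact Newton rule ‖Hd + g‖ ≤ η‖g‖. A minimal sketch of that stopping test inside a conjugate gradient solve for Hd = −g (η and the names are illustrative):

```python
import numpy as np

def cg_with_relative_residual(H, g, eta=0.1, max_iter=200):
    # Conjugate gradients for H d = -g, terminated when the relative
    # residual satisfies ||H d + g|| <= eta * ||g||.
    d = np.zeros_like(g)
    r = -g.copy()                  # residual r = -g - H d  (d = 0 initially)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) <= eta * np.linalg.norm(g):
            break                  # classic inexact Newton stopping test
        Hp = H @ p
        alpha = rs / (p @ Hp)
        d += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d
```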