2014
DOI: 10.1137/130921428

Proximal Newton-Type Methods for Minimizing Composite Functions

Abstract: We generalize Newton-type methods for minimizing smooth functions to handle a sum of two convex functions: a smooth function and a nonsmooth function with a simple proximal mapping. We show that the resulting proximal Newton-type methods inherit the desirable convergence behavior of Newton-type methods for minimizing smooth functions, even when search directions are computed inexactly. Many popular methods tailored to problems arising in bioinformatics, signal processing, and statistical learning are special c…
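
To make the setup concrete, here is a minimal Python sketch of one proximal Newton-type step for the composite problem min_x g(x) + h(x), taking h to be an ℓ1 penalty so that its proximal mapping is soft thresholding. The subproblem is solved inexactly by proximal gradient iterations on the local quadratic model, in the spirit of the inexact search directions the abstract mentions; all names here (grad_g, hess_g, lam, inner_iters) are illustrative assumptions, not notation from the paper.

```python
# A minimal sketch of one proximal Newton-type step, assuming
# f(x) = g(x) + h(x) with g smooth and h(x) = lam * ||x||_1.
# grad_g, hess_g, lam, inner_iters are illustrative names, not
# notation from the paper.
import numpy as np

def soft_threshold(v, t):
    """Proximal mapping of t * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_newton_direction(x, grad_g, hess_g, lam, inner_iters=50):
    """Inexactly solve the proximal Newton subproblem

        min_y  grad_g(x)^T (y - x) + 0.5 (y - x)^T H (y - x) + lam * ||y||_1

    by proximal gradient iterations on the quadratic model, and
    return the search direction y - x."""
    g = grad_g(x)                          # gradient of the smooth part at x
    H = hess_g(x)                          # Hessian (or a quasi-Newton approximation)
    L = max(np.linalg.norm(H, 2), 1e-12)   # curvature bound; step size is 1/L
    y = x.copy()
    for _ in range(inner_iters):           # inexact inner solve
        model_grad = g + H @ (y - x)       # gradient of the local quadratic model
        y = soft_threshold(y - model_grad / L, lam / L)
    return y - x
```

In a full method this direction would be combined with a backtracking line search on the composite objective g + h, the kind of safeguard the paper's analysis uses to retain global convergence even with inexactly computed directions.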

Cited by 262 publications (384 citation statements)
References 24 publications

“…In the nonlinear optimization setting, the complexity of various unconstrained methods has been derived under exact derivative information [7,8,17], and also under inexact information, where the errors are bounded in a deterministic fashion [3,6,11,14,20]. In all cases of the deterministic inexact setting, traditional optimization algorithms such as line search, trust-region, or adaptive regularization methods are applied with little modification and work in practice as well as in theory, while the error is assumed to be bounded in some decaying manner at each iteration.…”
Section: Introduction
Mentioning, confidence: 99%
“…In the last few years, several algorithmic frameworks for large-scale composite convex optimization have been proposed. Examples include active-set methods [18], stochastic methods [16], Newton-type methods [17], and block coordinate descent methods [29]. In principle, all these algorithmic frameworks could be combined with the multilevel framework developed in this paper.…”
Section: Discussion
Mentioning, confidence: 99%
“…Substituting in this inequality $t_1$ by $(\phi(x_k) - \phi^*)$ and $t_2$ by $(\phi(x_{k+1}) - \phi^*)$ and using (14) and then (11), one has…”
Section: Proposition
Mentioning, confidence: 99%
“…Algorithms for minimising composite functions have been extensively investigated and have found applications in many problems, such as inverse covariance estimation, logistic regression, sparse least squares, and feasibility problems; see e.g. [9,14,15,19] and the references quoted therein.…”
Section: Introduction
Mentioning, confidence: 99%