2014
DOI: 10.21236/ada595588

An Accelerated Linearized Alternating Direction Method of Multipliers

Abstract: We present a novel framework, namely AADMM, for accelerating the linearized alternating direction method of multipliers (ADMM). The basic idea of AADMM is to incorporate a multi-step acceleration scheme into linearized ADMM. We demonstrate that, for solving a class of convex composite optimization problems with linear constraints, the rate of convergence of AADMM is better than that of linearized ADMM in terms of their dependence on the Lipschitz constant of the smooth component. Moreover, AADMM is capable of …
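To make the idea concrete, here is a minimal Python sketch of linearized ADMM with a Nesterov-type extrapolation step, for a problem of the form min_x f(x) + g(Ax), rewritten as min f(x) + g(z) subject to Ax = z, with f smooth (L-Lipschitz gradient) and g prox-friendly. This is an illustrative generic variant under those assumptions, not the paper's exact multi-step AADMM scheme; the names accelerated_linearized_admm, grad_f, and prox_g are hypothetical.

```python
import numpy as np

def accelerated_linearized_admm(grad_f, prox_g, A, L, rho=1.0, n_iter=500):
    """Illustrative sketch: linearized ADMM with Nesterov-style extrapolation for
        min_x f(x) + g(z)  s.t.  Ax = z,
    where f is smooth with L-Lipschitz gradient and
    prox_g(v, t) = argmin_z g(z) + ||z - v||^2 / (2t).
    NOT the exact multi-step AADMM updates from the paper."""
    m, n = A.shape
    x = x_old = np.zeros(n)
    z = np.zeros(m)
    u = np.zeros(m)                       # scaled dual variable (lambda / rho)
    t = 1.0
    # Valid step size for the linearized x-step: majorizes L + rho * ||A||^2.
    tau = 1.0 / (L + rho * np.linalg.norm(A, 2) ** 2)
    for _ in range(n_iter):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_old)    # Nesterov extrapolation
        # Linearized x-update: one gradient step on the augmented Lagrangian at y.
        g_aug = grad_f(y) + rho * A.T @ (A @ y - z + u)
        x_old, x = x, y - tau * g_aug
        # Exact z-update via the proximal operator of g.
        z = prox_g(A @ x + u, 1.0 / rho)
        # Dual ascent on the (scaled) multiplier.
        u = u + A @ x - z
        t = t_next
    return x, z
```

For instance, with f(x) = ½‖Mx − b‖² (so L = ‖M‖²) and g = λ‖·‖₁ with prox_g(v, t) given by soft-thresholding at level λt, this sketch solves a LASSO-type problem.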

Cited by 99 publications (275 citation statements). References 45 publications.

“…The early iteration speed of BARISTA is due to its tight approximation of the Hessian of the cost function via the diagonal majorizers developed in this paper and the use of Nesterov momentum acceleration. Nesterov momentum has been added to AL algorithms in some cases [22], although those algorithms require an estimate of the Lipschitz constant, so the diagonal majorizers presented here may be useful for those methods.…”
Section: Discussion (mentioning)
confidence: 99%
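As a concrete illustration of the excerpt's point, the sketch below shows Nesterov-accelerated gradient descent in which the classical scalar step 1/L (which requires a Lipschitz estimate) can be swapped for per-coordinate steps 1/d_j taken from a diagonal majorizer d. This is a generic hedged sketch, not BARISTA or the AL algorithms of [22]; nag_with_majorizer and grad_f are hypothetical names.

```python
import numpy as np

def nag_with_majorizer(grad_f, x0, d, n_iter=200):
    """Nesterov-accelerated gradient descent on a smooth f. With scalar
    d = L this is the classical scheme whose step size 1/L needs a
    Lipschitz estimate, as the excerpt notes; passing a vector d (a
    diagonal majorizer, d_j >= curvature along coordinate j) gives
    per-coordinate steps 1/d_j instead. Illustrative sketch only."""
    x = x_old = np.asarray(x0, dtype=float)
    d = np.broadcast_to(np.asarray(d, dtype=float), x.shape)
    t = 1.0
    for _ in range(n_iter):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_old)   # momentum extrapolation
        x_old, x = x, y - grad_f(y) / d              # majorizer-scaled gradient step
        t = t_next
    return x
```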
“…Deterministic ADMM [163] — advantages: distributed computation, fast convergence; drawback: many samples needed per iteration to deal with stochasticity. Stochastic ADMM [159], [160], [186] — advantages: stochastic and distributed computation…”
Section: Seldom Achieving Global Optimum (mentioning)
confidence: 99%
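To clarify the deterministic/stochastic distinction drawn in that comparison, here is a hedged sketch of a single linearized x-step for f(x) = (1/N) Σᵢ fᵢ(x): deterministic ADMM evaluates all N sample gradients per iteration (the cost flagged above), while stochastic ADMM substitutes a minibatch estimate. The helper x_update and its signature are hypothetical, not the algorithms of [159], [160], [163], or [186].

```python
import numpy as np

def x_update(x, z, u, A, rho, tau, grads, batch=None, rng=None):
    """One linearized x-step of ADMM on  min (1/N) sum_i f_i(x) + g(z)  s.t. Ax = z,
    where grads is a list of per-sample gradient callables. With batch=None the
    step is deterministic (all N gradients evaluated); with a small batch it is a
    stochastic-ADMM-style estimate. Hypothetical helper for illustration only."""
    if batch is None:
        g = np.mean([gi(x) for gi in grads], axis=0)     # full deterministic gradient
    else:
        rng = rng or np.random.default_rng()
        idx = rng.choice(len(grads), size=batch, replace=False)
        g = np.mean([grads[i](x) for i in idx], axis=0)  # unbiased minibatch estimate
    return x - tau * (g + rho * A.T @ (A @ x - z + u))   # linearized augmented step
```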
“…Instead, we use “preconditioned” ADMM (PADMM) [61], [62], accelerated using Nesterov momentum [64], essentially using a single FISTA step as the x-update in (12): $x_i \leftarrow \operatorname{soft}\!\left(z_{i-1} - \tfrac{1}{c} A^{\top}\!\left(A z_{i-1} - u_{i-1} + b_{i-1}\right);\ \tfrac{1}{\beta\mu c}\right)$, $t_i \leftarrow \left(1 + \sqrt{1 + 4\,t_{i-1}^2}\right)/2$, $z_i \leftarrow x_i + \tfrac{t_{i-1}-1}{t_i}\left(x_i - x_{i-1}\right)$.…”
Section: Solving the Majorized Objective With ADMM (mentioning)
confidence: 99%
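Reading the reconstructed updates as code: one FISTA step is a gradient step on the quadratic term, followed by soft-thresholding, followed by momentum extrapolation. The sketch below follows the excerpt's variable names; the assumption c ≥ ‖A‖² (so that 1/c is a valid step size) and the function names are mine, not from [61], [62], or [64].

```python
import numpy as np

def soft(v, thresh):
    """Elementwise soft-thresholding: the proximal operator of thresh * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

def fista_x_update(A, b, u, x_prev, z_prev, t_prev, c, beta, mu):
    """Single FISTA step matching the excerpt's three updates: gradient step
    on the quadratic term, soft-threshold, then momentum extrapolation.
    Assumes c >= ||A||^2 so that 1/c is a valid step size (my assumption)."""
    x = soft(z_prev - (A.T @ (A @ z_prev - u + b)) / c, 1.0 / (beta * mu * c))
    t = (1.0 + np.sqrt(1.0 + 4.0 * t_prev ** 2)) / 2.0
    z = x + ((t_prev - 1.0) / t) * (x - x_prev)
    return x, z, t
```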