2014
DOI: 10.1137/130919362
Optimal Primal-Dual Methods for a Class of Saddle Point Problems

Abstract: We present a novel accelerated primal-dual (APD) method for solving a class of deterministic and stochastic saddle point problems (SPP). The basic idea of this algorithm is to incorporate a multi-step acceleration scheme into the primal-dual method without smoothing the objective function. For deterministic SPP, the APD method achieves the same optimal rate of convergence as Nesterov's smoothing technique. Our stochastic APD method exhibits an optimal rate of convergence for stochastic SPP not only in terms of …
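As context for the abstract, the sketch below illustrates the general shape of a primal-dual iteration with extrapolation for a bilinear saddle point problem min_x max_y ⟨Kx, y⟩ + g(x) − f*(y). It is a simplified Chambolle-Pock-style scheme with a constant extrapolation parameter, not the paper's exact APD step-size policy; the names prox_g, prox_fstar, tau, sigma, and theta are illustrative assumptions.

```python
import numpy as np

def primal_dual_extrapolated(K, prox_g, prox_fstar, x0, y0,
                             tau, sigma, theta=1.0, iters=500):
    """Sketch of a primal-dual iteration with extrapolation for
    min_x max_y <Kx, y> + g(x) - f*(y).  A simplified scheme with
    constant steps, NOT the paper's exact APD step-size policy;
    prox_g and prox_fstar are caller-supplied proximal operators.
    """
    x, y = x0.copy(), y0.copy()
    x_bar = x.copy()
    for _ in range(iters):
        # Dual ascent step on the extrapolated primal point.
        y = prox_fstar(y + sigma * (K @ x_bar), sigma)
        # Primal descent step against the updated dual variable.
        x_new = prox_g(x - tau * (K.T @ y), tau)
        # Extrapolation ("momentum") step that drives acceleration.
        x_bar = x_new + theta * (x_new - x)
        x = x_new
    return x, y

# Demo with g = 0.5*||x - a||^2 and f*(y) = 0.5*||y||^2,
# both of which have closed-form proximal operators.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K = rng.standard_normal((20, 10))
    a = rng.standard_normal(10)
    prox_g = lambda v, t: (v + t * a) / (1.0 + t)   # prox of t*0.5||x-a||^2
    prox_fstar = lambda v, s: v / (1.0 + s)         # prox of s*0.5||y||^2
    L = np.linalg.norm(K, 2)                        # spectral norm of K
    x, y = primal_dual_extrapolated(K, prox_g, prox_fstar,
                                    np.zeros(10), np.zeros(20),
                                    tau=0.9 / L, sigma=0.9 / L)
    print(x[:3])
```

The step sizes are chosen so that tau * sigma * ‖K‖² < 1, the usual sufficient condition for convergence of this family of schemes.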

Cited by 190 publications (176 citation statements)
References 42 publications
“…for t = 0, …, N − 1 do. We may also linearize ‖Bw^t − Kx − b‖² and generate x^{t+1} by (13), as discussed in [18, 10]. This variant is called the preconditioned ADMM (P-ADMM).…”
Section: Algorithm (mentioning)
confidence: 99%
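For readers without access to the citing paper's equation (13), the linearized x-update alluded to here is the standard preconditioned step: the quadratic coupling term is replaced by its gradient at x^t plus a proximal term. A hedged reconstruction in generic notation (the simple term g, penalty ρ, and step size η are illustrative, not the citing paper's symbols):

$$
x^{t+1} \;=\; \operatorname*{arg\,min}_{x}\; g(x) \;-\; \rho\,\big\langle K^{\top}(Bw^{t} - Kx^{t} - b),\, x \big\rangle \;+\; \frac{1}{2\eta}\,\|x - x^{t}\|^{2}.
$$

Because the quadratic in K is linearized, the update needs only the prox of g rather than the solution of a linear system in $K^{\top}K$, which is the point of the preconditioned variant.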
“…It should be noted that all the methods in the above list require more assumptions on the AECCO and UCO problems (e.g., simplicity of G(·), strong convexity of G(·) or F (·)), in comparison with Nesterov's smoothing scheme. More recently, we proposed an accelerated primal-dual (APD) method for solving the UCO problem [13], which has the same optimal rate of convergence (1.16) as that of Nesterov's smoothing scheme in [40]. The advantage of the APD method over Nesterov's smoothing scheme is that it does not require boundedness on either X or Y .…”
Section: Algorithm (mentioning)
confidence: 99%
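For reference, the optimal rate cited as (1.16) has, up to constants and in generic notation (reconstructed here: L is the Lipschitz constant of the gradient of the smooth component, ‖K‖ the operator norm of the bilinear coupling, N the iteration count), the form

$$
f(\bar{x}_N) - f^{\star} \;=\; \mathcal{O}\!\left(\frac{L}{N^{2}} \;+\; \frac{\|K\|}{N}\right),
$$

in which the smooth part enjoys the accelerated 1/N² rate while the coupling contributes an unimprovable 1/N term.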
“…The linearization idea can also be adopted in the primal-dual framework to overcome the non-simplicity of the F(x) term in (1.1). In particular, Chen et al. [12] developed an accelerated scheme using Nesterov's idea [34–36] …”
Section: Related Work (mentioning)
confidence: 99%
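The "linearization idea" mentioned in this statement is, in its simplest form, the proximal-gradient device: a smooth but non-simple term is handled through its first-order model at the current point, so only the simple term needs an exact prox. A minimal sketch under that reading (the names grad_F, prox_r, and tau are illustrative assumptions, not the cited papers' notation):

```python
def prox_linear_step(x, grad_F, prox_r, tau):
    """One linearized (proximal-gradient) step for min_x F(x) + r(x):
    the smooth term F enters only through its gradient at the current
    point, while the simple term r is handled exactly via its prox."""
    return prox_r(x - tau * grad_F(x), tau)
```

Accelerated variants interleave such steps with Nesterov-style extrapolation points, which is the combination the statement attributes to Chen et al. [12].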
“…These assumptions are usually presented in the form of Lipschitz continuity. Traditionally, convergence properties of most first-order methods [4, 24, 30], including gradient methods [19, 31, 32], are derived by imposing Lipschitz continuity assumptions. For a smooth function, the assumption is imposed on the magnitude of its gradient, while for a nonsmooth objective function it is imposed on the function values.…”
Section: Introduction (mentioning)
confidence: 99%
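Concretely, the two forms of Lipschitz continuity contrasted here are the standard ones:

$$
\|\nabla f(x) - \nabla f(y)\| \le L\,\|x - y\| \quad \text{(smooth case: imposed on the gradient)},
$$
$$
|f(x) - f(y)| \le M\,\|x - y\| \quad \text{(nonsmooth case: imposed on the function values)}.
$$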