2021
DOI: 10.48550/arxiv.2111.12743
Preprint

Robust Accelerated Primal-Dual Methods for Computing Saddle Points

Abstract: We consider strongly convex/strongly concave saddle point problems assuming we have access to unbiased stochastic estimates of the gradients. We propose a stochastic accelerated primal-dual (SAPD) algorithm and show that the SAPD iterate sequence, generated using constant primal-dual step sizes, converges linearly to a neighborhood of the unique saddle point, where the size of the neighborhood is determined by the asymptotic variance of the iterates. Interpreting the asymptotic variance as a measure of robustness …
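
To make the setting concrete, below is a minimal Python sketch of a stochastic primal-dual loop with constant step sizes on a toy strongly convex/strongly concave quadratic. The momentum-style extrapolation of the dual gradient, the toy objective, and all parameter values are illustrative assumptions in the spirit of the abstract, not the paper's exact SAPD recursion.

    import numpy as np

    # Toy problem: f(x, y) = (mu_x/2)*||x||^2 + x^T A y - (mu_y/2)*||y||^2,
    # whose unique saddle point is (0, 0).  Gradients are observed with
    # additive unbiased noise, matching the abstract's setting.
    rng = np.random.default_rng(0)
    n, m = 5, 5
    mu_x, mu_y = 1.0, 1.0
    A = rng.standard_normal((n, m))
    tau, sigma, theta = 0.05, 0.05, 0.9   # constant primal step, dual step, momentum
    noise = 0.1                           # std. dev. of the gradient noise

    def grad_x(x, y):
        # unbiased stochastic estimate of the primal gradient
        return mu_x * x + A @ y + noise * rng.standard_normal(n)

    def grad_y(x, y):
        # unbiased stochastic estimate of the dual gradient
        return A.T @ x - mu_y * y + noise * rng.standard_normal(m)

    x, y = np.ones(n), np.ones(m)
    gy_prev = grad_y(x, y)
    for k in range(2000):
        gy = grad_y(x, y)
        # dual ascent with an extrapolated gradient (momentum parameter theta)
        y = y + sigma * (gy + theta * (gy - gy_prev))
        # primal descent using the freshly updated dual variable
        x = x - tau * grad_x(x, y)
        gy_prev = gy

    # With constant step sizes the iterates do not converge exactly; they
    # settle in a noise-dominated neighborhood of the saddle point.
    print(np.linalg.norm(x), np.linalg.norm(y))

Running the sketch shows the behavior described in the abstract: the iterates approach the saddle point quickly, then fluctuate in a neighborhood whose size scales with the gradient noise and the chosen step sizes.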

Cited by 2 publications (4 citation statements)
References 31 publications (69 reference statements)
“…A gap also appears for the SPP with stochastic finite-sum structure [58,82,118]. The stochastic setting with bounded variance was considered in [32,85,125].…”
Section: Recent Advances (mentioning)
confidence: 99%
“…By using the restart technique we can generalize these results to the µ-strongly convex, µ-strongly concave case, see Section 3.3. Alternatively, we can combine the Smoothing technique with the Stochastic Accelerated Primal-Dual method from [75] for (12) with f(x, y) being µ_x-strongly convex and µ_y-strongly concave in the 2-norm (Euclidean setup). In this case we obtain the following bounds…”
Section: Saddle-point Problems (mentioning)
confidence: 99%
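
The restart idea mentioned in the quote above can be illustrated with a generic warm-started outer loop; the toy problem and the inner solver below (plain deterministic gradient descent-ascent) are placeholder assumptions, not the cited papers' methods.

    import numpy as np

    # Generic restart pattern: run a base primal-dual method for a fixed
    # budget, then warm-start the next stage from its output.  Under strong
    # convexity/strong concavity each stage contracts the distance to the
    # saddle point, which is what lets restarting upgrade a base method's
    # guarantees to the strongly convex-strongly concave case.
    def inner_stage(x, y, A, mu_x=1.0, mu_y=1.0, steps=100, tau=0.05, sigma=0.05):
        for _ in range(steps):
            gx = mu_x * x + A @ y          # primal gradient of the toy problem
            gy = A.T @ x - mu_y * y        # dual gradient of the toy problem
            x, y = x - tau * gx, y + sigma * gy
        return x, y

    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 5))
    x, y = np.ones(5), np.ones(5)
    for stage in range(8):                 # outer restart loop
        x, y = inner_stage(x, y, A)        # each stage is warm-started
        print(stage, np.linalg.norm(x) + np.linalg.norm(y))

Each outer stage reuses the previous output as its starting point, so the number of stages needed grows only logarithmically in the target accuracy.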
“…Note that most of the results for saddle-point problems (i.e., the mentioned result from [75] or its finite-sum composite generalization [69]) with different constants of smoothness and strong convexity/concavity were obtained by combining an Accelerated Gradient method for convex problems with the Catalyst envelope, which allows generalizing them to saddle-point problems [47]. There also exist loop-less (direct) accelerated methods that save a ln(ε^{-1}) factor in the complexity for µ_x-strongly convex, µ_y-strongly concave saddle-point problems [40].…”
Section: Saddle-point Problems (mentioning)
confidence: 99%