2021
DOI: 10.48550/arxiv.2102.08352
Preprint

Stochastic Variance Reduction for Variational Inequality Methods

Abstract: We propose stochastic variance reduced algorithms for solving convex-concave saddle point problems, monotone variational inequalities, and monotone inclusions. Our framework applies to extragradient, forward-backward-forward, and forward-reflected-backward methods both in Euclidean and Bregman setups. All proposed methods converge in exactly the same setting as their deterministic counterparts and they either match or improve the best-known complexities for solving structured min-max problems. Our results reinforce the correspondence between variance reduction in variational inequalities and minimization.
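To make the setting in the abstract concrete, below is a minimal sketch of a loopless, SVRG-style variance-reduced extragradient iteration for a finite-sum monotone operator, applied to a toy bilinear saddle point problem. The operator, step size, and snapshot probability are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

# Minimal sketch (assumed details, not the paper's exact method): a loopless,
# SVRG-style variance-reduced extragradient iteration for a finite-sum monotone
# operator F(z) = (1/n) * sum_i F_i(z). The toy problem is the bilinear saddle
# point min_x max_y x^T A y with A = sum_i A_i, whose operator
# F(x, y) = (A y, -A^T x) is monotone.

rng = np.random.default_rng(0)
n, d = 50, 10
A_blocks = [rng.standard_normal((d, d)) / n for _ in range(n)]


def F_i(i, z):
    """Component operator; scaled so that the average over i equals F."""
    x, y = z[:d], z[d:]
    Ai = A_blocks[i]
    return np.concatenate([n * (Ai @ y), -n * (Ai.T @ x)])


def F_full(z):
    return np.mean([F_i(i, z) for i in range(n)], axis=0)


def loopless_vr_extragradient(z0, step=0.05, p=0.1, iters=2000):
    """Illustrative loopless variance-reduced extragradient (assumed step and p)."""
    z, w = z0.copy(), z0.copy()      # w is the snapshot (reference) point
    Fw = F_full(w)                   # full operator evaluated at the snapshot
    for _ in range(iters):
        # Extrapolation step with the variance-reduced estimator F_i(z) - F_i(w) + F(w).
        i = rng.integers(n)
        z_half = z - step * (F_i(i, z) - F_i(i, w) + Fw)
        # Update step, using a freshly sampled component at the midpoint.
        j = rng.integers(n)
        z = z - step * (F_i(j, z_half) - F_i(j, w) + Fw)
        # Loopless snapshot refresh: with small probability p, recompute F(w).
        if rng.random() < p:
            w, Fw = z.copy(), F_full(z)
    return z


z_out = loopless_vr_extragradient(rng.standard_normal(2 * d))
print("residual ||F(z)|| after the run:", np.linalg.norm(F_full(z_out)))
```

The same estimator can be plugged into forward-backward-forward or forward-reflected-backward updates; only the way the two operator evaluations per iteration are combined changes.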

Cited by 14 publications (48 citation statements)
References 31 publications
“…We show that the proposed algorithm exhibits a linear convergence rate with high probability. To the best of our knowledge, this is the first stochastic algorithm with a linear rate for general standard-form LP problems (2). For unconstrained bilinear problems, our restarted scheme improves the complexity of existing linearly convergent stochastic algorithms [2] by a factor of the condition number.…”
Section: Contributions (mentioning; confidence: 99%)
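The restarted scheme referenced in the statement above is specific to the citing paper, but the general restart mechanism it relies on can be sketched generically: run a stochastic method for a fixed budget, then warm-start the next run from its output with a smaller step size, so that under a sharpness or error-bound condition each stage contracts the error by a constant factor. The inner solver and objective below are placeholders, not the cited algorithm.

```python
import numpy as np

# Hypothetical restart wrapper (placeholder inner solver and objective, not the
# cited paper's method): each stage runs a fixed-budget stochastic method, and
# the next stage warm-starts from its averaged output with a halved step size.
# Under sharpness, each stage contracts the error by a constant factor, which
# is what produces an overall linear rate.

rng = np.random.default_rng(1)


def sgd_stage(x0, steps=200, step=0.05):
    """Toy inner solver: averaged SGD on the sharp objective f(x) = ||x||_1
    with noisy subgradients."""
    x, running_sum = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        g = np.sign(x) + 0.1 * rng.standard_normal(x.shape)  # noisy subgradient
        x = x - step * g
        running_sum += x
    return running_sum / steps


def restarted(x0, stages=8):
    x = x0
    for s in range(stages):
        x = sgd_stage(x, step=0.05 / 2**s)  # shrink the step at each restart
        print(f"stage {s}: distance to solution = {np.linalg.norm(x):.4f}")
    return x


restarted(rng.standard_normal(5))
```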
“…However, beyond matrix games, their method requires extra assumptions such as a bounded domain and involves a three-loop algorithm. More recently, [2] proposes a stochastic extragradient method with variance reduction for solving variational inequalities. In the Euclidean setting, their method is based on a loopless variant of variance-reduced methods [27,31].…”
Section: Related Literature (mentioning; confidence: 99%)