2022
DOI: 10.48550/arxiv.2202.09674
Preprint
Generalized Optimistic Methods for Convex-Concave Saddle Point Problems

Cited by 3 publications (9 citation statements)
References 27 publications
“…Our algorithm requires access to an oracle for solving an MVI subproblem (see Definition 3.1) obtained by regularizing the pth-order Taylor series expansion of the operator. This is analogous to the Taylor series oracle from works on highly-smooth convex optimization [Gasnikov et al., 2019], and identical to the oracle from the Jiang and Mokhtari [2022] work on highly-smooth VIs.…”
Section: Introduction
“…We note that for the case of X = R^n, d(x) = ||x||_2^2, and F = ∇f, where f is a pth-order smooth convex function, the above subproblem is equivalent (up to constant factors) to the subproblem solved by the algorithm of , which is known to have optimal iteration complexity for highly-smooth convex optimization. Previous works on higher-order smooth MVIs also solve essentially the same subproblem in their algorithms [Jiang and Mokhtari, 2022]. It has been shown in [Jiang and Mokhtari, 2022, Lemma 7.1] that these subproblems are monotone and are guaranteed to have a unique solution, though efficiently finding such a solution in general remains an open problem, even in the case of convex optimization.…”
Section: Algorithm
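As a rough sketch of the subproblem both excerpts refer to (notation assumed here, not quoted from the paper; constants and exact regularization terms differ across the cited works): around the current iterate, the operator is replaced by its pth-order Taylor approximation plus a regularizer, and the oracle returns a solution of the resulting monotone variational inequality.

```latex
% p-th order Taylor approximation of the operator F around the iterate x_k
T_p(x; x_k) = \sum_{i=0}^{p-1} \frac{1}{i!}\, \nabla^i F(x_k)[x - x_k]^i .
% Regularized MVI subproblem: find x_{k+1} \in X such that, for all x \in X,
\big\langle\, T_p(x_{k+1}; x_k)
  + M \,\| x_{k+1} - x_k \|^{p-1} (x_{k+1} - x_k),\; x - x_{k+1} \,\big\rangle \;\ge\; 0 ,
% where M is a regularization constant tied to the Lipschitz constant of the
% p-th derivative of F (the precise choice of M varies between the works cited).
```

In the special case F = ∇f with d(x) = ||x||_2^2 mentioned in the second excerpt, this reduces to the regularized Taylor subproblem familiar from higher-order methods for smooth convex minimization, which is why the excerpt calls the two subproblems equivalent up to constant factors.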