2020
DOI: 10.48550/arxiv.2006.08484
Preprint

A generic adaptive restart scheme with applications to saddle point algorithms

Abstract: We provide a simple and generic adaptive restart scheme for convex optimization that is able to achieve worst-case bounds matching (up to constant multiplicative factors) optimal restart schemes that require knowledge of problem-specific constants. The scheme triggers restarts whenever there is sufficient reduction of a distance-based potential function. This potential function is always computable. We apply the scheme to obtain the first adaptive restart algorithm for saddle-point algorithms including primal-…
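The abstract's trigger, restarting whenever a computable, distance-based potential has decreased sufficiently, can be illustrated with a small sketch. The code below is not the paper's saddle-point algorithm; it is a hypothetical stand-in that wraps Nesterov-style acceleration on a toy quadratic, uses the gradient norm as the potential, and restarts the momentum whenever that potential falls to a beta-fraction of its value at the last restart (names and constants are assumptions for illustration only).

import numpy as np

# Toy problem: minimize 0.5 * x^T A x for an ill-conditioned diagonal A.
rng = np.random.default_rng(0)
A = np.diag(np.geomspace(1.0, 1e4, 50))

def grad(x):
    return A @ x

def restarted_agd(x0, step_size, beta=0.5, max_iters=20_000, tol=1e-10):
    # Accelerated gradient descent with an adaptive restart: the momentum is
    # reset whenever the surrogate potential (here, the gradient norm) drops
    # to a beta-fraction of its value at the start of the current epoch.
    x, y, t = x0.copy(), x0.copy(), 1.0
    ref = np.linalg.norm(grad(x0))          # potential at the last restart
    for _ in range(max_iters):
        x_new = y - step_size * grad(y)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
        p = np.linalg.norm(grad(x))
        if p <= tol:
            break
        if p <= beta * ref:                 # sufficient reduction -> restart
            y, t, ref = x.copy(), 1.0, p
    return x

x_final = restarted_agd(rng.standard_normal(50), step_size=1.0 / 1e4)
print(np.linalg.norm(grad(x_final)))

Because the potential is computable at every iterate, the trigger needs no knowledge of problem-specific constants such as the strong-convexity or sharpness parameter; only the reduction factor beta is chosen by the user.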

Cited by 1 publication (3 citation statements)
References 20 publications (28 reference statements)
“…While this could be acceptable in the strongly convex case, for more complex schemes that leverage, e.g., sharpness, this is unacceptable, as the required parameters are hard to estimate and generally inaccessible. This, however, can be remedied in the case of sharpness, at the cost of an extra O(log²)-factor in the rates, via scheduled restarts as done in [39], which do not require sharpness parameters as input, or when an error bound (of similar convergence rate) is available, as in the case of conditional gradients [27]; see also [23] for a very recent adaptive restart scheme using error bound estimators.…”
Section: Related Approaches
confidence: 99%
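For contrast with the adaptive trigger above, the scheduled restarts mentioned in this statement (as in [39]) can be sketched as a grid search over fixed restart periods; the helper names base_run and objective and the doubling grid below are assumptions for illustration, not the construction of [39].

import numpy as np

def run_with_period(base_run, x0, period, total_iters):
    # Restart the base method every `period` iterations; base_run(x, k) is a
    # hypothetical callback running k iterations of the base method from x.
    x, done = x0, 0
    while done < total_iters:
        k = min(period, total_iters - done)
        x = base_run(x, k)
        done += k
    return x

def scheduled_restarts(base_run, objective, x0, total_iters=4096):
    # Try a geometric grid of restart periods and keep the best candidate;
    # the grid search costs extra work but removes the need to know the
    # sharpness parameters in advance.
    periods = [2 ** j for j in range(1, int(np.log2(total_iters)) + 1)]
    candidates = [run_with_period(base_run, x0, p, total_iters) for p in periods]
    return min(candidates, key=objective)

Loosely speaking, the extra log-factor alluded to in the statement corresponds to the cost of searching over schedules instead of knowing the right restart period in advance.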
“…On the downside, restarts often explicitly depend on parameters arising from the additional structure under consideration, and the obtained guarantees are off by a constant factor or even a log factor. The former can often be remedied with adaptive or scheduled restarts (see, e.g., [39, 23]), albeit at some minor cost. In this way we can obtain fully adaptive algorithms that adapt to additional structure without knowing the accompanying parameters explicitly.…”
Section: Introduction
confidence: 99%