2022
DOI: 10.1109/tnnls.2021.3087579
Proximal Online Gradient Is Optimum for Dynamic Regret: A General Lower Bound

Cited by 11 publications (13 citation statements) | References 17 publications
“…where T is the maximal number of rounds, and D is the given budget of dynamics. But, the dynamic regret for an OCO method is O(√(TD) + √T), which is the same as in the case of no switching cost [20,21,50,51]. Furthermore, we provide a lower bound of dynamic regret, namely Ω(√(TD) + √T), for the OCO setting.…”
Section: Introduction
confidence: 58%
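For context, the dynamic-regret quantity quoted above is typically measured against a drifting comparator sequence whose total movement is capped by the budget D; the comparator symbols u_t below are my notation for that standard setup, not taken from the excerpt:

```latex
% Dynamic regret against a comparator sequence u_1, \dots, u_T
% whose path length is constrained by the budget D:
\mathrm{Reg}_T^{\mathrm{dyn}}
  \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(u_t),
\qquad
\sum_{t=1}^{T-1} \lVert u_{t+1} - u_t \rVert \;\le\; D.
% The quoted upper and lower bounds then match up to constants,
% O(\sqrt{TD} + \sqrt{T}) versus \Omega(\sqrt{TD} + \sqrt{T}),
% and note that \sqrt{TD} + \sqrt{T} \asymp \sqrt{T(1 + D)}.
```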
“…We now separately bound the terms on the right-hand side of (13). In order to bound the first term of the above inequality, we add and subtract several terms as follows:…”
Section: B Proof Of Theorem
confidence: 99%
“…where the last line follows from the fact that D r (x, y) is nonnegative when r(x) is convex, and the Lipschitz condition stated in (7). We now proceed to bound the other term on the right-hand side of (13). We add and subtract f t+1 (x t+1 ) to obtain…”
Section: B Proof Of Theorem
confidence: 99%
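The nonnegativity fact invoked in the excerpt above is presumably the standard property of a Bregman divergence generated by a convex regularizer r; under that reading, the step being used is:

```latex
% Bregman divergence generated by a differentiable r:
D_r(x, y) \;=\; r(x) - r(y) - \langle \nabla r(y),\, x - y \rangle.
% When r is convex, the first-order condition
%   r(x) \ge r(y) + \langle \nabla r(y),\, x - y \rangle
% holds for all x, y, and hence D_r(x, y) \ge 0.
```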
“…It is well known that, assuming convexity of the c_k and boundedness of the gradients, online gradient descent achieves an O(√K) regret bound, and this bound is improved to O(log K) assuming strong convexity of the c_k [9]. More general classes of OCO algorithms have been studied [11,21], notably (accelerated) proximal gradient descent algorithms concerned with composite convex functions of the form φ_k = f_k + g where only the f_k are smooth. Improved regret bounds again hold assuming strong convexity of the φ_k.…”
Section: Connection With Online Convex Optimization
confidence: 99%
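The composite setting described in the excerpt (φ_k = f_k + g with smooth f_k and a fixed nonsmooth g) can be sketched as a minimal proximal online gradient loop. Everything below is an illustrative assumption, not the cited paper's code: g is chosen as an ℓ1 penalty (so the prox step is soft-thresholding), and the step size, penalty weight, and function names are made up for the example.

```python
import numpy as np

def soft_threshold(v, tau):
    # Prox operator of tau * ||.||_1: elementwise shrinkage toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_online_gradient(grads, eta, lam, dim):
    """Proximal online gradient on composite losses
    phi_k(x) = f_k(x) + lam * ||x||_1, given an iterable of
    gradient oracles for the smooth parts f_k (hypothetical API)."""
    x = np.zeros(dim)
    iterates = [x.copy()]
    for grad_f in grads:
        # Gradient step on the smooth part f_k, then a prox step on g.
        x = soft_threshold(x - eta * grad_f(x), eta * lam)
        iterates.append(x.copy())
    return iterates
```

For a quick sanity check with static quadratic losses f_k(x) = ½‖x − z‖² (so grad_f(x) = x − z), the iterates contract toward the soft-thresholded target: coordinates of z equal to zero stay exactly zero, while nonzero coordinates settle at a slightly shrunken value, as one expects from the ℓ1 prox.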