2017
DOI: 10.48550/arxiv.1702.04783
Preprint

Online Convex Optimization with Time-Varying Constraints

Abstract: This paper considers online convex optimization with time-varying constraint functions. Specifically, we have a sequence of convex objective functions {f_t(x)}_{t=0}^∞ and convex constraint functions {g_{t,i}(x)}_{t=0}^∞ for i ∈ {1, ..., k}. The functions are gradually revealed over time. For a given ε > 0, the goal is to choose points x_t at every step t, without knowing the f_t and g_{t,i} functions on that step, to achieve a time average at most ε worse than the best fixed decision that could be chosen with hindsight, subj…
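The primal-dual, virtual-queue style of algorithm common in this line of work can be sketched as follows. This is a minimal illustration, not the paper's exact method: the function names, the step size `eta`, and the ball-shaped feasible set are assumptions made for the example, and a single constraint g_t(x) ≤ 0 is used rather than k of them.

```python
import numpy as np

def online_primal_dual(grad_f, grad_g, g, T, dim, eta=0.1, radius=1.0):
    """Sketch of an online primal-dual method with one time-varying
    constraint g_t(x) <= 0.  At step t the learner commits to x_t,
    then the round-t information is revealed through the callbacks
    grad_f(t, x), grad_g(t, x), and g(t, x)."""
    x = np.zeros(dim)   # current decision x_t
    Q = 0.0             # virtual queue tracking accumulated violation
    history = []
    for t in range(T):
        history.append(x.copy())
        # Primal step: descend on f_t plus the queue-weighted constraint.
        x = x - eta * (grad_f(t, x) + Q * grad_g(t, x))
        # Project onto the feasible ball X = {x : ||x|| <= radius}.
        norm = np.linalg.norm(x)
        if norm > radius:
            x *= radius / norm
        # Dual step: the queue absorbs violation and never goes negative.
        Q = max(Q + g(t, x), 0.0)
    return np.array(history)
```

For instance, with the fixed objective f_t(x) = ||x - (1, 0)||^2 and constraint x_1 - 0.5 ≤ 0, the iterates drift toward the constrained optimum (0.5, 0): the queue Q grows while the constraint is violated, which steers the primal step back toward feasibility.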

Cited by 19 publications (28 citation statements); References 8 publications.
“…Fig. 2 shows that our proposed algorithms have smaller average cumulative constraint violation, which also matches the theoretical results since the standard constraint violation metric rather than the stricter metric was used in [23]- [25], [29].…”
Section: Simulations (supporting)
confidence: 80%
“…In order to avoid using the upper bounds of the loss and constraint functions and their subgradients to design the algorithm parameters α_t, β_t, and γ_t, inspired by the algorithms proposed in [24], [25], [50], we slightly modify the dual updating rule (10) as (14). As a result, the updating rule (8)-(9) can be executed in a distributed manner, which is given in pseudocode as Algorithm 1.…”
Section: A. Algorithm Description (mentioning)
confidence: 99%
“…x_{ii,t} := x_{i,t}. Motivated by the algorithm proposed in [25], by modifying (18) and (19), a distributed online primal-dual dynamic mirror descent algorithm as in Algorithm 1 is designed to learn the variational GNE of the time-varying game Γ(V, Ω_t, J_t) under partial-decision information. In order to execute Algorithm 1, at each time slot t, every player i needs to know ∇_i J_{i,t}(x_{i,t}), g_{i,t}(x_{i,t}) and ∇g_{i,t}(x_{i,t}) rather than the full information of J_{i,t} and g_{i,t}, which is similar to most online algorithms for optimization and games [16], [26]-[30].…”
Section: Results (mentioning)
confidence: 99%