2021
DOI: 10.48550/arxiv.2107.09721
Preprint

Online Projected Gradient Descent for Stochastic Optimization with Decision-Dependent Distributions

Abstract: This paper investigates the problem of tracking solutions of stochastic optimization problems with time-varying costs and decision-dependent distributions. In this context, the paper focuses on the online stochastic gradient descent method, and establishes its convergence to the sequence of optimizers (within a bounded error) in expectation and in high probability. In particular, high-probability convergence results are derived by modeling the gradient error as a sub-Weibull random variable. The theoretical findings…
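
For intuition, the update the abstract studies can be sketched in a few lines. The following is a minimal sketch, not the authors' implementation; the decision-dependent distribution D(x), the quadratic time-varying cost, the ball constraint, and the step size are all illustrative assumptions of mine.

```python
# Minimal sketch of online projected stochastic gradient descent with a
# decision-dependent distribution. All modeling choices below (Gaussian
# D(x), quadratic cost, ball constraint) are illustrative assumptions,
# not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def sample_decision_dependent(x, eps=0.3):
    # z ~ D(x): the data distribution reacts to the deployed decision x.
    return eps * x + rng.normal(size=x.shape)

def stochastic_grad(x, z, target):
    # Gradient in x of the per-sample cost f_t(x; z) = 0.5 * ||x + z - target||^2.
    return x + z - target

def project_ball(x, radius=5.0):
    # Euclidean projection onto {x : ||x|| <= radius} (closed form).
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

x = np.zeros(2)
step = 0.1
for t in range(200):
    target = np.array([np.cos(0.05 * t), np.sin(0.05 * t)])  # drifting optimum
    z = sample_decision_dependent(x)   # sample from the distribution induced by x
    g = stochastic_grad(x, z, target)  # noisy gradient of the time-varying cost
    x = project_ball(x - step * g)     # projected stochastic gradient step
```

The paper's guarantees concern how closely such an iterate tracks the drifting optimizers, in expectation and with high probability under a sub-Weibull model of the gradient error.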

Cited by 1 publication (7 citation statements)
References 22 publications (60 reference statements)

“…For these reasons, we introduce an equilibrium problem associated with (1) for which solutions will be the saddle points of the stationary problem that they induce. These can be seen as the counterparts of the so-called performatively stable points in [18,42,56] in our stochastic minimax setup (1). We first introduce the equilibrium problem in the static (or time invariant) setting.…”
Section: Introduction (mentioning)
confidence: 92%
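
For context, the "performatively stable points" the excerpt refers to admit a compact definition; the first display is the standard one from the performative prediction literature cited as [18,42,56], and the saddle-point condition is only a hedged reading of how the citing work extends it to the minimax setting.

```latex
% Performative stability: x_PS is optimal for the distribution it induces.
x_{\mathrm{PS}} \in \operatorname*{arg\,min}_{x \in X}
  \; \mathbb{E}_{z \sim D(x_{\mathrm{PS}})}\left[ f(x, z) \right]

% Assumed minimax counterpart: (x^*, y^*) is a saddle point of the
% stationary problem induced by the distribution D(x^*, y^*).
\mathbb{E}_{z \sim D(x^*, y^*)}\left[ f(x^*, y, z) \right]
  \le \mathbb{E}_{z \sim D(x^*, y^*)}\left[ f(x^*, y^*, z) \right]
  \le \mathbb{E}_{z \sim D(x^*, y^*)}\left[ f(x, y^*, z) \right]
```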
“…When the projection onto the convex set X_t ∩ {x ∈ ℝ^n : g(x) ≤ 0} exists in closed form or is cheap to compute, projected gradient methods have been proposed for computing optimizers of (7) [18,42,56]. When the projection is computationally heavy, typical approaches for stationary problems include primal-dual methods and interior point methods [40].…”
Section: Constrained Stochastic Minimization Problems (mentioning)
confidence: 99%
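
To make the first regime concrete, here is a small sketch of one projected gradient step when X_t is a box and g is affine, so that each set has a closed-form projection. Dykstra's alternating-projection scheme, used below to handle the intersection, is a standard tool of my choosing and is not named in the excerpt; all constants are illustrative.

```python
# Sketch: projected gradient step onto X_t ∩ {x : g(x) <= 0}, with X_t a box
# and g(x) = a.x - b affine. Dykstra's algorithm combines the two closed-form
# projections into a projection onto the intersection.
import numpy as np

def proj_box(v, lo=-1.0, hi=1.0):
    # Closed-form projection onto the box [lo, hi]^n.
    return np.clip(v, lo, hi)

def proj_halfspace(v, a, b):
    # Closed-form projection onto the halfspace {x : a.x <= b}.
    viol = a @ v - b
    return v if viol <= 0 else v - (viol / (a @ a)) * a

def proj_intersection(z, a, b, iters=50):
    # Dykstra's algorithm: converges to the projection of z onto the
    # intersection of the two convex sets above.
    x, p, q = z.copy(), np.zeros_like(z), np.zeros_like(z)
    for _ in range(iters):
        y = proj_box(x + p)
        p = x + p - y
        x = proj_halfspace(y + q, a, b)
        q = y + q - x
    return x

# One projected gradient step on F(x) = 0.5 * ||x - (2, 2)||^2.
a, b = np.array([1.0, 1.0]), 0.5
x = np.zeros(2)
grad = x - np.array([2.0, 2.0])
x = proj_intersection(x - 0.1 * grad, a, b)
```

When such projections are unavailable or expensive, the primal-dual and interior point methods the excerpt mentions avoid projecting onto the intersection altogether.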