2020
DOI: 10.1007/s10107-020-01490-5

Inexact stochastic mirror descent for two-stage nonlinear stochastic programs

Abstract: We introduce an inexact variant of Stochastic Mirror Descent (SMD), called Inexact Stochastic Mirror Descent (ISMD), to solve nonlinear two-stage stochastic programs where the second stage problem has linear and nonlinear coupling constraints and a nonlinear objective function which depends on both first and second stage decisions. Given a candidate first stage solution and a realization of the second stage random vector, each iteration of ISMD combines a stochastic subgradient descent using a prox-mapping with …
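The key ingredient named in the abstract is the prox-mapping step of stochastic mirror descent applied to a stochastic subgradient that may itself be computed inexactly. The sketch below is only a generic illustration of that idea, using an entropy mirror map on the probability simplex and hypothetical helper names (ismd_simplex, subgrad_oracle, sample); the paper's actual inexactness model, prox-mapping, and convergence analysis are in the full text.

import numpy as np

def ismd_simplex(subgrad_oracle, sample, n, n_iters=1000, step=0.05, seed=0):
    """Generic sketch of (inexact) stochastic mirror descent on the simplex.

    subgrad_oracle(x, xi) is assumed to return a (possibly inexact) stochastic
    subgradient of the objective at x for the realization xi, and sample(rng)
    draws one realization of the random vector.  With the entropy mirror map,
    the prox-mapping reduces to an exponentiated-gradient update.
    """
    rng = np.random.default_rng(seed)
    x = np.full(n, 1.0 / n)            # start at the barycenter of the simplex
    x_avg = np.zeros(n)
    for k in range(1, n_iters + 1):
        xi = sample(rng)               # realization of the random vector
        g = subgrad_oracle(x, xi)      # (inexact) stochastic subgradient
        x = x * np.exp(-step * g)      # entropy prox-mapping (multiplicative update)
        x /= x.sum()                   # renormalize onto the simplex
        x_avg += (x - x_avg) / k       # averaged iterate, the usual SMD output
    return x_avg

# Toy usage: minimize E|a^T x - xi| over the simplex, with xi uniform on [0, 1].
a = np.linspace(0.0, 1.0, 5)
x_bar = ismd_simplex(lambda x, xi: np.sign(a @ x - xi) * a,
                     sample=lambda rng: rng.uniform(0.0, 1.0),
                     n=5)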

Cited by 14 publications (27 citation statements)
References 18 publications
“…Proof of formula (2.48). We prove (2.48) adapting the proof of Lemma 2.1 in [6] to the special case of value function…”
Section: End If End For End For
mentioning
confidence: 94%
“…When X_t is polyhedral, formula (2.48) follows from duality for linear programming. For a more general convex set X_t, formula (2.48) follows directly from applying Lemma 2.1 in [3] or Proposition 3.2 in [6] to the value function Q_t^{k-1}; these results respectively provide a characterization of the subdifferential and of subgradients for value functions of general convex optimization problems (whose argument appears in the objective function and in the linear and nonlinear coupling constraints of the corresponding optimization problem). For the interested reader and for the sake of completeness, we provide in the Appendix a proof of relation (2.48), specializing the proof of Lemma 2.1 in [3] to the particular case of the value function Q_t^{k-1}.…”
Section: Computation Of the Subgradient In
mentioning
confidence: 99%
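The subgradient formula these citing papers refer to can be illustrated in a simplified linear setting (only a sketch; the cited lemmas cover general convex problems with nonlinear coupling constraints). Writing the second-stage value function as Q(x) = \min_y \{ q^\top y : W y = h - T x,\ y \ge 0 \}, LP duality gives
\[
Q(x) \;=\; \max_{\lambda}\ \{\, \lambda^\top (h - Tx) \;:\; W^\top \lambda \le q \,\},
\]
so for a dual-optimal \lambda^* at a point x_0 with Q(x_0) finite, weak duality yields, for every x,
\[
Q(x) \;\ge\; (\lambda^*)^\top (h - Tx) \;=\; Q(x_0) + \langle -T^\top \lambda^*,\, x - x_0 \rangle ,
\]
i.e. -T^\top \lambda^* \in \partial Q(x_0), and the affine function on the right is an exact cut for Q at x_0.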
“…This task can be easily achieved for value functions of linear programs, see for instance Proposition 2.1 in [13]. For nonlinear differentiable problems, the derivation of inexact cuts is given in Propositions 2.2 and 2.3 in [13] and Proposition 3.8 in [10]. However, this task is more delicate for nondifferentiable optimization problems.…”
mentioning
confidence: 99%
“…Due to (H0), the value function Q is convex, and if x ∈ ri(dom(Q)) then Q is subdifferentiable at x and there exists a cut (a lower bounding affine function) for Q at x which coincides with Q at x. More generally, under some assumptions, the characterization of the subdifferential of Q at x ∈ X was given in [12, Lemma 2.1], and formulas for affine lower bounding functions for Q were derived in [10, Proposition 3.2] on the basis of optimal primal-dual solutions to (1.1). When only approximate primal-dual solutions are available, we can only compute inexact cuts, which are still lower bounding functions for the value function but which do not coincide with this function at the point x used to compute the cut.…”
mentioning
confidence: 99%
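In the same simplified linear setting sketched above (again only an illustration; the works quoted here treat general convex problems), an inexact cut arises when only an \varepsilon-optimal, dual-feasible \hat\lambda is available at x_0: the affine function
\[
C(x) \;=\; \hat\lambda^\top (h - Tx)
\]
still satisfies C(x) \le Q(x) for all x by weak duality, while \varepsilon-optimality only guarantees
\[
Q(x_0) - \varepsilon \;\le\; C(x_0) \;\le\; Q(x_0),
\]
so the cut remains a valid lower bounding function but may no longer touch Q at x_0, exactly as described in the quotation above.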
“…Duality is also a fundamental tool in the reformulation of Robust Optimization problems, see for instance [3]. Finally, derivatives of the value function of classes of optimization problems can be related to optimal dual solutions, see [5], [22] and more recently [8,10,11] for the characterization of subdifferentials, subgradients, and ε-subgradients of value functions of convex optimization problems.…”
mentioning
confidence: 99%