2016
DOI: 10.1137/140955495

On the Solution of Stochastic Optimization and Variational Problems in Imperfect Information Regimes

Abstract: We consider the solution of a stochastic convex optimization problem E[f(x; θ*, ξ)] over a closed and convex set X in a regime where θ* is unavailable and ξ is a suitably defined random variable. Instead, θ* may be obtained through the solution of a learning problem that requires minimizing a metric E[g(θ; η)] in θ over a closed and convex set Θ. Traditional approaches have been either sequential or direct variational approaches. In the case of the former, this entails the following steps: (i) a…
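The truncated abstract contrasts sequential schemes (learn θ* first, then optimize) with jointly solving the learning and optimization problems. As a hedged illustration only, the sketch below shows what one such coupled stochastic approximation loop could look like; the quadratic choices of f and g, the ball constraint sets for X and Θ, and the step-size rule are all hypothetical assumptions, not the authors' algorithm.

```python
# Minimal sketch (not the authors' code) of a coupled stochastic
# approximation loop for joint estimation and optimization.
# Assumptions (hypothetical): f(x; theta, xi) = 0.5*||x - theta||^2 + xi^T x,
# g(theta; eta) = 0.5*||theta - eta||^2, X and Theta are Euclidean balls,
# eta-samples have mean theta_star, and xi is zero-mean noise.
import numpy as np

rng = np.random.default_rng(0)

def project_ball(v, radius=10.0):
    """Euclidean projection onto the ball {z : ||z|| <= radius}."""
    norm = np.linalg.norm(v)
    return v if norm <= radius else (radius / norm) * v

theta_star = np.array([1.0, -2.0])   # unknown true parameter
theta = np.zeros(2)                  # parameter iterate in Theta
x = np.zeros(2)                      # decision iterate in X

for t in range(1, 10001):
    gamma = 1.0 / t                  # diminishing step size

    # Learning step: projected stochastic gradient step on E[g(theta; eta)]
    # using a fresh sample eta_t.
    eta = theta_star + rng.normal(scale=0.5, size=2)
    theta = project_ball(theta - gamma * (theta - eta))

    # Computation step: projected stochastic gradient step on
    # E[f(x; theta_t, xi)] at the *current* estimate theta_t.
    xi = rng.normal(scale=0.5, size=2)
    x = project_ball(x - gamma * ((x - theta) + xi))

print("theta_t:", theta)   # approaches theta_star
print("x_t:    ", x)       # approaches the minimizer of E[f(x; theta_star, xi)]
```

Each pass takes one projected stochastic gradient step on the learning problem and one on the optimization problem at the current parameter estimate, so no samples are discarded while waiting for θ* to be fully learned.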

Cited by 25 publications (18 citation statements)
References 42 publications

“…Jiang and Shanbhag [21,22] introduced and studied the JEO problem (Opt(u*))-(Est) in a stochastic setting, and Ahmadi and Shanbhag [2] examined the deterministic case. In this paper, we consider the deterministic JEO problem, for which [2] provided some remarkable convergence results.…”
Section: Related Work
confidence: 99%
“…The main problem is that at each step t, the information from the previous steps cannot be utilized and is essentially wasted. To address this, Jiang and Shanbhag [21,22] and Ahmadi and Shanbhag [2] propose a scheme that jointly solves the estimation and optimization problems, which we refer to as JEO. With this scheme, they can efficiently generate a sequence of points x_t and u_t such that f(x_t, u_t) will indeed converge to the desired minimum (Opt(u*)), and they give corresponding non-asymptotic error rates.…”
Section: Introduction
confidence: 99%
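The excerpt above does not spell out the coupled updates. As a hedged illustration, assuming the standard projected stochastic-gradient form used in coupled SA schemes (with Π denoting Euclidean projection, γ_t a diminishing step size, and U the feasible set of the estimation problem, all assumptions here rather than details from the paper), the paired iteration could read:

$$
u_{t+1} = \Pi_{U}\!\left(u_t - \gamma_t \nabla_u g(u_t;\eta_t)\right), \qquad
x_{t+1} = \Pi_{X}\!\left(x_t - \gamma_t \nabla_x f(x_t, u_t;\xi_t)\right),
$$

so that each iteration performs exactly one estimation step and one optimization step, with the optimization step using the current estimate u_t rather than a converged u*; this is consistent with the "two stochastic subgradient projection steps" per iteration noted in the excerpt below.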
“…Moreover, the rate of convergence of these methods for functional constrained problems has not been well understood beyond conic constraints, even in the deterministic setting. Third, in [16] (and see references therein), Jiang and Shanbhag developed a coupled SA method to solve a stochastic optimization problem whose parameters are given by another optimization problem; it is hence not applicable to problem (1.4)-(1.5). Moreover, each iteration of their method requires two stochastic subgradient projection steps and is hence more expensive than that of CSPA.…”
confidence: 99%
“…To the best of our knowledge, this is the first optimal learning paper that addresses the dual problems of objective maximization and parameter identification for problems with parametric belief models (other literature with similar dual-objective formulations usually assumes that experiments are inexpensive, e.g., [19]). Previous papers concentrate only on discovering the optimal alternative, but in many real-world situations scientists also care about obtaining accurate estimates of the parameters.…”
confidence: 99%