2017
DOI: 10.2139/ssrn.3073057

Calibration of Distributionally Robust Empirical Optimization Models

Abstract: In this paper, we study the out-of-sample properties of robust empirical optimization and develop a theory for data-driven calibration of the "robustness parameter" for worst-case maximization problems with concave reward functions. Building on the intuition that robust optimization reduces the sensitivity of the expected reward to errors in the model by controlling the spread of the reward distribution, we show that the first-order benefit of a "little bit of robustness" is a significant reduction in the variance of the out-of-sample reward, while the corresponding impact on the mean is almost an order of magnitude smaller.
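The mean-variance mechanism behind this calibration can be made explicit with a first-order expansion. The display below is a sketch consistent with the abstract, not the paper's exact statement (the assumptions, the choice of divergence, and the constants may differ); it assumes a smooth divergence generator φ with φ(1) = φ'(1) = 0 and φ''(1) > 0, a reward f, and the empirical measure P_n:

```latex
% Sketch: first-order expansion of the phi-divergence-penalized
% worst case in the robustness parameter delta, assuming a smooth
% generator with phi(1) = phi'(1) = 0 and phi''(1) > 0.
\[
  \inf_{Q}\left\{ \mathbb{E}_{Q}\big[f(x,\xi)\big]
    + \frac{1}{\delta}\, D_{\phi}\big(Q \,\Vert\, P_n\big) \right\}
  = \mathbb{E}_{P_n}\big[f(x,\xi)\big]
    - \frac{\delta}{2\,\phi''(1)}\,\mathrm{Var}_{P_n}\!\big(f(x,\xi)\big)
    + o(\delta),
\]
```

so the worst-case value equals the empirical mean reward minus a variance penalty of order δ, and maximizing it steers the solution toward rewards with smaller spread.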

Cited by 9 publications (8 citation statements). References 48 publications (73 reference statements).
“…In particular, as we discuss in the case of LASSO, according to our results corresponding to contribution E), the RWPI-based prescription of the size of uncertainty can be shown (under suitable regularity conditions) to decay at rate $O(\sqrt{\log d/n})$ (uniformly over $d$ and $n$ such that $\log^2 d \lesssim n$), which is in agreement with the findings of the high-dimensional statistics literature (see [13,30,3] and references therein). A profile-function-based approach to calibrating the radius of uncertainty in the context of empirical-likelihood-based DRO can be found in [27,15,21,26].…”
Section: 3 (mentioning; confidence: 99%)
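For context (our addition, not part of the quoted statement): the agreement claimed above refers to the standard high-dimensional-statistics tuning of the LASSO, whose regularization parameter is taken at the same $\sqrt{\log d / n}$ rate, e.g. for sub-Gaussian noise with scale σ:

```latex
% Standard high-dimensional LASSO tuning: regularization parameter
% lambda_n of order sqrt(log d / n) for noise scale sigma.
\[
  \hat{\beta} \in \arg\min_{\beta \in \mathbb{R}^{d}}
    \frac{1}{n} \sum_{i=1}^{n} \big(y_i - x_i^{\top}\beta\big)^2
    + \lambda_n \lVert \beta \rVert_1,
  \qquad
  \lambda_n \asymp \sigma \sqrt{\frac{\log d}{n}} .
\]
```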
“…Putting aside the technical differences in using L-DRO versus the common DRO in (18) (the latter requires an extra layer of analysis in the Lagrangian reformulation), we point out two conceptual distinctions between Gotoh et al. (2018, 2021) and our results in Sections 2 and 3.2. The first is the criterion for measuring the quality of an obtained solution $x$, in particular the role of the variance of the loss function $h(x, \xi)$, $\mathrm{Var}_P(h(x,\xi))$.…”
Section: Bias-Variance Tradeoff (mentioning; confidence: 87%)
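To illustrate the role this passage assigns to $\mathrm{Var}_P(h(x,\xi))$, here is a minimal, self-contained sketch. It is our construction, not code from either paper: the newsvendor loss, the cost parameters, and the grid are hypothetical, and the in-sample variance penalty stands in for the first-order effect of the robust formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical newsvendor loss: order x units at unit cost c,
# sell min(x, demand) at unit price p.
c, p = 1.0, 2.0

def loss(x, xi):
    return c * x - p * np.minimum(x, xi)

xi_train = rng.exponential(scale=10.0, size=50)      # small training sample
xi_test = rng.exponential(scale=10.0, size=200_000)  # out-of-sample demand

grid = np.linspace(0.0, 40.0, 401)  # candidate order quantities

def solve(delta):
    """Minimize empirical mean + (delta/2) * empirical variance of the loss,
    the first-order surrogate of a divergence-penalized robust problem."""
    L = loss(grid[:, None], xi_train[None, :])   # (grid, sample) loss matrix
    obj = L.mean(axis=1) + 0.5 * delta * L.var(axis=1)
    return grid[np.argmin(obj)]

for delta in [0.0, 0.05, 0.10]:
    x = solve(delta)
    Lt = loss(x, xi_test)
    print(f"delta={delta:4.2f}  x={x:5.2f}  "
          f"out-of-sample mean={Lt.mean():7.3f}  variance={Lt.var():8.3f}")
```

As delta grows, the chosen $x$ typically shifts toward lower loss variance at some cost in mean loss, which is exactly the weighting on $\mathrm{Var}_P(h(x,\xi))$ that the quoted passage attributes to Gotoh et al.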
“…In particular, any risk-aware consideration should already be incorporated into the construction of the loss $h$. When an obtained solution $x$ is used in many future test cases, an estimate of $Z(x)$ using $n_{\mathrm{test}}$ test data points has variance $\mathrm{Var}_P(h(x,\xi))/n_{\mathrm{test}}$ (instead of $\mathrm{Var}_P(h(x,\xi))$), and thus the variance of $h$ plays a relatively negligible role. This differs from Gotoh et al. (2018, 2021), who take the alternate view that puts more weight on the variability of the loss function.…”
Section: Bias-Variance Tradeoff (mentioning; confidence: 89%)
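The variance scaling invoked in that excerpt is the elementary one for an i.i.d. sample mean, spelled out here for completeness (our addition):

```latex
% Variance of the plug-in estimate of Z(x) = E_P[h(x, xi)]
% from n_test i.i.d. test points.
\[
  \widehat{Z}(x) = \frac{1}{n_{\mathrm{test}}}
    \sum_{i=1}^{n_{\mathrm{test}}} h(x, \xi_i),
  \qquad
  \mathrm{Var}\big(\widehat{Z}(x)\big)
    = \frac{\mathrm{Var}_{P}\big(h(x,\xi)\big)}{n_{\mathrm{test}}},
\]
```

which shrinks as $n_{\mathrm{test}}$ grows; this is the sense in which the variance of $h$ is "relatively negligible" when a solution is evaluated over many test cases.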