2021
DOI: 10.48550/arxiv.2108.08887
Preprint

Risk Bounds and Calibration for a Smart Predict-then-Optimize Method

Abstract: The predict-then-optimize framework is fundamental in practical stochastic decision-making problems: first predict unknown parameters of an optimization model, then solve the problem using the predicted values. A natural loss function in this setting is defined by measuring the decision error induced by the predicted parameters, which was named the Smart Predict-then-Optimize (SPO) loss by Elmachtoub and Grigas [2021]. Since the SPO loss is typically nonconvex and possibly discontinuous, Elmachtoub and Grigas …

Cited by 2 publications (2 citation statements)
References 15 publications (26 reference statements)
“…Remarkably, risk bounds about the SPO+ loss relative to the SPO loss can be derived [13], and the empirical minimizer of the SPO+ loss is shown to achieve low excess true risk with high probability. Note that, by definition, 𝒚⊤Π*(𝒚) ≥ 𝒚⊤Π*(ŷ) and therefore L(𝒚, ŷ) ≥ 0.…”
Section: Regret Loss and SPO Training
confidence: 99%
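The inequality quoted above uses a maximization convention: Π*(𝒚) returns the decision that is optimal under cost vector 𝒚, so acting on a prediction ŷ can only do as well or worse under the true 𝒚, and the regret L(𝒚, ŷ) = 𝒚⊤Π*(𝒚) − 𝒚⊤Π*(ŷ) is nonnegative. As a minimal sketch (not the cited authors' implementation), with a hypothetical finite vertex set standing in for the feasible polytope:

```python
def oracle(y, vertices):
    """Pi*(y): the feasible decision maximizing y . w over the vertex set."""
    return max(vertices, key=lambda w: sum(yi * wi for yi, wi in zip(y, w)))

def spo_loss(y_true, y_pred, vertices):
    """SPO (regret) loss: value lost by acting on y_pred instead of y_true."""
    dot = lambda a, b: sum(x * z for x, z in zip(a, b))
    w_star = oracle(y_true, vertices)   # best decision in hindsight
    w_hat = oracle(y_pred, vertices)    # decision induced by the prediction
    # y_true . w_star >= y_true . w_hat by optimality of w_star, so loss >= 0
    return dot(y_true, w_star) - dot(y_true, w_hat)

# Toy feasible region: three extreme points of a 2-D polytope.
vertices = [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5)]
y_true = (2.0, 1.0)
y_pred = (0.5, 3.0)   # a bad prediction that flips the optimal decision
print(spo_loss(y_true, y_pred, vertices))  # 1.0
```

A perfect prediction (ŷ = 𝒚) induces the same decision and gives zero loss; since the oracle only enters through its argmax, the loss is piecewise constant in ŷ, which is exactly the nonconvexity and discontinuity the abstract mentions.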
“…Balghiti et al. (2019) later provide finite-sample performance guarantees for the SPO loss in the form of generalization bounds. Recently, Liu and Grigas (2021) have strengthened the consistency of SPO+ by providing risk guarantees and a calibration analysis in the polyhedral and strongly convex cases. Elmachtoub et al. (2020) propose a method to train decision trees using the SPO loss and demonstrate its excellent numerical performance and lower model complexity.…”
Section: Relevant Literature
confidence: 99%