2018
DOI: 10.48550/arxiv.1810.12997
Preprint

An Online-Learning Approach to Inverse Optimization

Cited by 5 publications (8 citation statements, 2019–2023); references 0 publications.
“…The suboptimality loss measures how well the predicted ĉ_new explains the realized optimal solution x*_new. It is more commonly adopted in the inverse linear programming literature (Mohajerin Esfahani et al., 2018; Bärmann et al., 2018; Chen and Kılınç-Karzan, 2020).…”
Section: Predict-then-optimize/Contextual Linear Programming
confidence: 99%
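The suboptimality loss quoted above can be sketched in a few lines. This is a minimal illustration, not code from any of the cited papers: the loss of a predicted cost vector ĉ is ĉ·x* minus the optimal value under ĉ, and here the feasible set is assumed to be given by an explicit vertex list (an LP attains its minimum at a vertex), which keeps the sketch dependency-free.

```python
def suboptimality_loss(c_hat, x_star, vertices):
    """How far the observed decision x_star is from being optimal
    under the predicted cost c_hat, over a polytope given by its
    vertices: c_hat . x_star - min_{x in X} c_hat . x  (always >= 0)."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    best = min(dot(c_hat, v) for v in vertices)
    return dot(c_hat, x_star) - best

# Unit box in 2D; under c = (1, 2) the vertex (0, 0) is optimal.
verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(suboptimality_loss((1.0, 2.0), (0.0, 0.0), verts))   # 0.0: x* optimal
print(suboptimality_loss((-1.0, 2.0), (0.0, 0.0), verts))  # 1.0: x* suboptimal
```

A loss of zero means the observed decision is perfectly explained by the predicted cost; a positive loss quantifies the explanation gap.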
“…The estimand in our project (the acquisition function of a Bayesian optimization) is analogous to the risk preferences estimated in Li's paper. Additionally, the sequential nature of the learning problem we study relates to online learning in inverse optimization (Bärmann et al., 2018; Dong et al., 2018; Dong and Zeng, 2020) and inverse Markov decision processes (Erkin et al., 2010).…”
Section: Related Work
confidence: 99%
“…Saez-Gallego and Morales [2017] jointly learn c and b, which are affine functions of u. Bärmann et al. [2017, 2020] and Dong et al. [2018] study online versions of inverse linear and convex optimization, respectively, learning a sequence of cost functions where the feasible set for each observation is assumed to be fully specified. Tan et al. [2019] proposed a gradient-based approach for learning the costs and constraints of a PLP by 'unrolling' a barrier interior-point solver and backpropagating through it.…”
Section: Related Work
confidence: 99%
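The online setting described in the last snippet (a fully specified feasible set revealed each round, together with the agent's optimal decision) can be sketched with a subgradient update on the suboptimality loss, in the spirit of the cited online inverse-LP work. Everything here is an illustrative assumption, not the papers' actual algorithms: random vertex-list feasible sets, the step size `eta`, the initial guess, and the crude renormalization (real methods project onto a fixed set such as the simplex so the estimate cannot collapse to zero).

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def argmin_vertex(c, vertices):
    # An LP over a polytope attains its minimum at a vertex.
    return min(vertices, key=lambda v: dot(c, v))

def online_inverse_lp(n_rounds, c_true, eta=0.2, seed=0):
    """Toy online inverse-LP loop: each round t reveals a feasible set
    and the agent's optimal decision under the hidden cost c_true; we
    take a subgradient step on the suboptimality loss of our estimate."""
    rng = random.Random(seed)
    c_hat = [1.0, 1.0]  # initial guess (illustrative)
    losses = []
    for _ in range(n_rounds):
        verts = [(rng.random(), rng.random()) for _ in range(6)]
        x_star = argmin_vertex(c_true, verts)   # observed decision
        x_hat = argmin_vertex(c_hat, verts)     # our model's prediction
        # Suboptimality loss of c_hat at this round (nonnegative).
        losses.append(dot(c_hat, x_star) - dot(c_hat, x_hat))
        # A subgradient of c -> c.x_star - min_x c.x at c_hat is x_star - x_hat.
        c_hat = [c - eta * (s - h) for c, s, h in zip(c_hat, x_star, x_hat)]
        # Crude renormalization standing in for a projection step.
        total = sum(max(c, 0.0) for c in c_hat) or 1.0
        c_hat = [max(c, 0.0) / total for c in c_hat]
    return c_hat, losses

c_hat, losses = online_inverse_lp(50, c_true=(1.0, 3.0))
```

The per-round loss is zero exactly when the observed decision is also optimal under the current estimate, which is why these methods need no access to the agent's true cost, only to its decisions.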