2020
DOI: 10.48550/arxiv.2009.14552
Preprint

Wasserstein Distributionally Robust Inverse Multiobjective Optimization

Abstract: Inverse multiobjective optimization provides a general framework for the unsupervised learning task of inferring the parameters of a multiobjective decision making problem (DMP) from a set of observed decisions made by human experts. However, the performance of this framework relies critically on the selection of an appropriate decision making structure, a set of observed decisions that are sufficient and of high quality, and a parameter space that contains enough information about the DMP. To hedge against t…
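
As a minimal sketch of the formulation the abstract alludes to (our notation; the paper's exact loss function and parameterization may differ), a Wasserstein distributionally robust inverse problem chooses parameters that perform well under the worst-case distribution in a Wasserstein ball around the empirical distribution of the observed decisions:

$$\min_{\theta \in \Theta} \; \sup_{\mathbb{Q}\,:\, W(\mathbb{Q},\,\widehat{\mathbb{P}}_N) \le \epsilon} \; \mathbb{E}_{\mathbb{Q}}\big[\ell(\theta; y)\big],$$

where $\widehat{\mathbb{P}}_N$ is the empirical distribution of the $N$ observed decisions, $W(\cdot,\cdot)$ is the Wasserstein distance, $\epsilon$ is the radius of the ambiguity set, and $\ell(\theta; y)$ quantifies how poorly the DMP parameterized by $\theta$ explains an observed decision $y$.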

Cited by 2 publications (4 citation statements) · References 22 publications
“…Using techniques from distributionally robust optimization [67], they show that for linear forward models, IOP-DD-DRO(ℓ_ASO, P_N, CVaR, ε) can be reformulated as a large conic optimization problem. More recently, Dong and Zeng [63] build on the distributionally robust framework of Esfahani et al. [68] for multi-objective forward optimization models MO(θ, φ), which reduce to the conventional convex forward model FOP-CVX(θ, φ) when considering only a single objective. Dong and Zeng [63] use the expected value risk function ρ_P(·) = E_P[·] and the distance minimization loss ℓ_D, meaning that their inverse problem is a multiobjective distributionally robust generalization of the Inverse Distance problem of Aswani et al. [17].…”
Section: Distributionally Robust Inverse Optimization
confidence: 99%
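
For context, the CVaR risk measure appearing in IOP-DD-DRO above has the standard Rockafellar–Uryasev representation (stated here for reference; the loss $\ell_{ASO}$ itself is defined in the cited works and not reproduced here):

$$\mathrm{CVaR}_{\alpha}(Z) \;=\; \min_{t \in \mathbb{R}} \Big\{\, t + \tfrac{1}{1-\alpha}\,\mathbb{E}\big[(Z - t)_{+}\big] \Big\},$$

which roughly averages the loss over the worst $(1-\alpha)$ fraction of outcomes, so the robust inverse problem penalizes the most poorly explained observations rather than only the average one.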
“…More recently, Dong and Zeng [63] build on the distributionally robust framework of Esfahani et al. [68] for multi-objective forward optimization models MO(θ, φ), which reduce to the conventional convex forward model FOP-CVX(θ, φ) when considering only a single objective. Dong and Zeng [63] use the expected value risk function ρ_P(·) = E_P[·] and the distance minimization loss ℓ_D, meaning that their inverse problem is a multiobjective distributionally robust generalization of the Inverse Distance problem of Aswani et al. [17]. Since the nominal inverse problem of Aswani et al. [17] itself is NP-hard, the distributionally robust formulation is even more difficult to solve.…”
Section: Distributionally Robust Inverse Optimization
confidence: 99%
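
A rough illustration of the distance minimization loss $\ell_D$ referenced in these excerpts (our notation, under the assumption that the multiobjective loss measures the distance from an observed decision to the efficient set; the cited papers' definitions may differ in detail):

$$\ell_D(\theta; y) \;=\; \min_{x \,\in\, S(\theta,\phi)} \;\| y - x \|^2,$$

where $S(\theta,\phi)$ denotes the set of (weakly) efficient solutions of the forward problem MO(θ, φ). Pairing this loss with the expected-value risk $\rho_P(\cdot) = E_P[\cdot]$ over a Wasserstein ambiguity set yields the distributionally robust generalization described above.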
“…Different from the majority of the existing literature, [17,18,19] take another perspective to explain the so-called "data inconsistency": decision makers are driven by multiple criteria, and different people have different preferences or weights over those criteria, which leads them to make a variety of responses or choices. It can then be anticipated that once we remove the variance caused by such multi-criteria decision making from the data, their quality or consistency can be greatly improved.…”
Section: Introduction
confidence: 99%