Abstract: Most inverse optimization models impute unspecified parameters of an objective function to make an observed solution optimal for a given optimization problem with a fixed feasible set. We propose two approaches to impute unspecified left-hand-side constraint coefficients in addition to a cost vector for a given linear optimization problem. The first approach identifies parameters minimizing the duality gap, while the second minimally perturbs prior estimates of the unspecified parameters to satisfy strong duality…
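The duality-gap criterion in the first approach can be illustrated on a toy linear program. The sketch below is illustrative only and does not reproduce the abstract's joint imputation of constraint coefficients: the feasible set and observed solution are made-up assumptions, and because the forward optimum for c ≥ 0 sits at a vertex, LP duality lets the gap c·x_obs − min_v c·v be minimized by a simple grid search over normalized costs.

```python
import numpy as np

# Hypothetical forward problem: min c.x  s.t.  x1 + x2 >= 1, x >= 0.
# Its vertices are (1,0) and (0,1); for c >= 0 the optimum lies at a
# vertex, and by LP duality the best dual bound equals min_v c.v.
vertices = np.array([[1.0, 0.0], [0.0, 1.0]])
x_obs = np.array([0.3, 0.7])        # observed (feasible) solution

def duality_gap(c):
    # gap between the objective at x_obs and the best attainable dual bound
    return c @ x_obs - (vertices @ c).min()

# Grid-search normalized costs c = (t, 1-t) for the gap-minimizing c
ts = np.linspace(0.0, 1.0, 101)
gaps = [duality_gap(np.array([t, 1.0 - t])) for t in ts]
best = ts[int(np.argmin(gaps))]
# best = 0.5: only c proportional to (1, 1) makes x_obs, a point in the
# relative interior of the facet x1 + x2 = 1, optimal (gap 0).
```

Note that for an observation in the relative interior of a facet, the gap-minimizing cost is the facet's normal; observations at a vertex would admit a whole cone of zero-gap costs.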
“…While there are a few studies that focus on estimating constraint parameters (Güler and Hamacher 2010, Birge et al. 2017, Chan and Kaw 2020), the vast majority of papers focus on estimating the cost vector. Our focus in this paper is also on estimating the cost vector.…”
Inverse optimization, determining parameters of an optimization problem that render a given solution optimal, has received increasing attention in recent years. While significant inverse optimization literature exists for convex optimization problems, there have been few advances for discrete problems, despite the ubiquity of applications that fundamentally rely on discrete decision-making. In this paper, we present a new set of theoretical insights and algorithms for the general class of inverse mixed integer linear optimization problems. Our theoretical results establish a new characterization of optimality conditions, defined as certificate sets, which are leveraged to design new types of cutting plane algorithms using trust regions. Through an extensive set of computational experiments, we show that our methods provide substantial improvements over existing methods in solving the largest and most difficult instances to date.
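The classical cutting-plane idea that such algorithms build on can be sketched in a few lines. The example below is a hypothetical toy, not the paper's method: the feasible integer points are enumerated outright, the master problem is solved by grid search over normalized costs, and the certificate-set and trust-region machinery is omitted. Cuts of the form c·x_obs ≥ c·x_k are added until the observed solution is optimal under the imputed cost.

```python
import numpy as np

# Toy forward MILP: max c.x over binary x with 2*x1 + 3*x2 <= 3
X = np.array([[0, 0], [1, 0], [0, 1]])   # enumerated feasible points
x_obs = np.array([1, 0])                 # observed solution
c_hat = np.array([0.3, 0.7])             # prior cost estimate

ts = np.linspace(0.0, 1.0, 1001)
cands = np.stack([ts, 1.0 - ts], axis=1)  # normalized candidate costs

cuts = []          # feasible points returned by the forward oracle
for _ in range(10):
    # Master (by grid search): cost nearest the prior among candidates
    # satisfying c.x_obs >= c.x_k for every cut generated so far
    ok = np.ones(len(cands), dtype=bool)
    for xk in cuts:
        ok &= cands @ x_obs >= cands @ xk - 1e-12
    feas = cands[ok]
    c = feas[np.argmin(np.linalg.norm(feas - c_hat, axis=1))]
    # Forward oracle: best feasible point under the current cost
    k = int(np.argmax(X @ c))
    if X[k] @ c <= x_obs @ c + 1e-12:
        break                      # x_obs is optimal under c: done
    cuts.append(X[k])
```

Here the prior (0.3, 0.7) makes (0, 1) strictly better than the observed (1, 0), so one cut is generated and the master moves to c = (0.5, 0.5), the minimal perturbation under which the observation is optimal. The paper's contribution, as the abstract states, lies in stronger cuts (certificate sets) and trust regions that keep such loops tractable at scale.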
“…Naturally, such a general-purpose approach will not be the method of choice for all classes of IO problems. In particular, for non-parametric linear programs, closed-form solutions for learning the c vector (Figure 1(i)) and for learning the constraint coefficients have been derived by Chan et al. [12, 14] and Chan and Kaw [13], respectively. However, learning objective and constraint coefficients jointly (Figure 1(ii)) has, to date, received little attention.…”
Given a set of observations generated by an optimization process, the goal of inverse optimization is to determine likely parameters of that process. We cast inverse optimization as a form of deep learning. Our method, called deep inverse optimization, unrolls an iterative optimization process and then uses backpropagation to learn parameters that generate the observations. We demonstrate that by backpropagating through the interior point algorithm we can learn the coefficients determining the cost vector and the constraints, independently or jointly, for both non-parametric and parametric linear programs, starting from one or multiple observations. With this approach, inverse optimization can leverage concepts and algorithms from deep learning.
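The unroll-and-backpropagate idea can be sketched without a deep learning framework. The code below is a minimal numpy illustration, not the paper's interior-point unrolling: it substitutes an entropy-regularized relaxation of a simplex-constrained LP, whose solution is a softmax of the scaled costs, so the forward map is differentiable and the softmax Jacobian gives the gradient for learning the cost vector from one observation. The specific problem, temperature, and step size are all assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def forward(c, tau=1.0):
    # Entropy-regularized LP over the simplex: argmin_x c.x - tau*H(x)
    return softmax(-c / tau)

# "Observed" solution generated by a hidden cost vector
tau = 1.0
c_true = np.array([0.0, 1.0, 2.0])
x_obs = forward(c_true, tau)

c = np.zeros(3)                  # initial cost estimate
lr = 2.0
for _ in range(3000):
    x = forward(c, tau)
    g = x - x_obs                              # dL/dx for L = 0.5*||x - x_obs||^2
    J = -(np.diag(x) - np.outer(x, x)) / tau   # dx/dc via the softmax Jacobian
    c -= lr * (J @ g)                          # gradient step on the cost vector

loss = 0.5 * np.sum((forward(c, tau) - x_obs) ** 2)
```

The learned c reproduces x_obs but is identified only up to an additive shift, since costs differing by a constant induce the same softmax; the same kind of scale/shift ambiguity is why inverse optimization methods typically normalize the cost vector.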
“…The inverse optimization methodology finds the parameters of a forward optimization model by building an inverse optimization model in situations where the values of the forward model's decision variables are observed but its parameters are unknown [29]. It is usually employed to investigate the mechanism of a system that cannot be probed directly [30] or to recover an individual player's parameters in a multiplayer game [31].…”