2020
DOI: 10.1609/aaai.v34i02.5521
Smart Predict-and-Optimize for Hard Combinatorial Optimization Problems

Abstract: Combinatorial optimization assumes that all parameters of the optimization problem, e.g., the weights in the objective function, are fixed. Often, these weights are mere estimates, and machine learning techniques are increasingly used for their estimation. Recently, Smart Predict and Optimize (SPO) has been proposed for problems with a linear objective function over the predictions, more specifically linear programming problems. It takes the regret of the predictions on the linear problem into account, by rep…
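To make the regret notion in the abstract concrete, here is a minimal sketch (not code from the paper): a toy fractional-knapsack LP is solved once under the true weights and once under predicted weights, and the regret is the gap in true objective value between the two solutions. The LP data, capacity, and helper names are illustrative assumptions, using scipy.optimize.linprog as the solver.

```python
# A minimal sketch of the regret that SPO-style losses target: the true-objective
# value lost by optimizing predicted weights instead of the true ones.
# The toy knapsack-style LP, its capacity, and the helper names are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

A_ub = np.ones((1, 4))        # a single capacity constraint over 4 items
b_ub = np.array([2.0])        # illustrative capacity
bounds = [(0.0, 1.0)] * 4     # fractional items, so the problem is an LP

def solve_lp(weights):
    """Maximize weights @ x subject to the fixed constraints (linprog minimizes)."""
    res = linprog(-weights, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x

def regret(true_weights, predicted_weights):
    """True-objective gap between the prediction-induced solution and the true optimum."""
    x_pred = solve_lp(predicted_weights)
    x_true = solve_lp(true_weights)
    return true_weights @ x_true - true_weights @ x_pred

c_true = np.array([5.0, 4.0, 3.0, 1.0])
c_pred = np.array([1.0, 4.5, 3.5, 6.0])   # a poor prediction of the weights
print(regret(c_true, c_pred))             # 4.0; non-negative by construction
```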

Cited by 58 publications (38 citation statements) | References 12 publications
“…While this is inevitably computationally more expensive, solving the LP of Model 1 requires low degree polynomial time in the number of items to rank [18], due to the sparsity of its constraints. Fortunately, this issue can be vastly alleviated with the application of hot-starting schemes [14], since the SPO framework relies on iteratively updating a stored solution to each LP instance for slightly different objective coefficients as model weights are updated. Thus, each instance of Model 1 need not be solved from scratch.…”
Section: Discussion (mentioning)
confidence: 99%
“…Fortunately, the Smart Predict-and-Optimize framework is particularly amenable to hot-starting. Since an LP instance for each data sample must be solved at each epoch, a feasible solution to each LP is available from the previous epoch, corresponding to the same constraints and a cost vector which changes based on updates to the DNN model parameters during training [14]. Storing a hot-start solution to each LP instance in a training set requires memory no larger than that of the training set, and as the model weights converge, these hot-starts are expected to be very close to the optimal policies for each LP.…”
Section: SPOFR: Implementation Details and Efficiency (mentioning)
confidence: 99%
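As a rough illustration of the hot-starting scheme described in the two statements above, the sketch below keeps one LP model alive per training sample and only rewrites its objective coefficients each epoch, so the solver can re-optimize from the stored basis instead of solving from scratch. It assumes a gurobipy installation; the ranking-style LP, its capacity, and the function names are illustrative, not taken from the cited implementation.

```python
# A rough sketch of the hot-starting idea: one LP model per training sample is
# kept alive across epochs, and only its objective coefficients are rewritten as
# the predictive model's weights change, so the solver re-optimizes from the
# previously stored basis instead of solving each instance from scratch.
# Assumes gurobipy is installed; the ranking-style LP and all names are illustrative.
import gurobipy as gp
from gurobipy import GRB

def build_ranking_lp(n_items, capacity):
    """Build the fixed feasible region once; only the objective will change later."""
    m = gp.Model()
    m.Params.OutputFlag = 0
    x = m.addVars(n_items, lb=0.0, ub=1.0)
    m.addConstr(gp.quicksum(x[i] for i in range(n_items)) <= capacity)
    return m, x

# One persistent model per training sample: the hot-start storage is no larger
# than the training set itself, as the quoted statement points out.
models = {s: build_ranking_lp(n_items=10, capacity=3) for s in range(100)}

def solve_sample(sample_id, predicted_costs):
    """Re-optimize one sample's LP under the current epoch's predicted costs."""
    m, x = models[sample_id]
    # Constraints are untouched, so the solver warm-starts from the previous basis.
    m.setObjective(gp.quicksum(predicted_costs[i] * x[i] for i in x), GRB.MAXIMIZE)
    m.optimize()
    return [x[i].X for i in x]
```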
“…An extension of the proposed methodology, called SPO Trees (SPOTs), for training decision trees under this loss is offered again by Elmachtoub et al. in [29]. Another extension to the smart SPO approach is presented by Mandi et al. in [30], where the use of SPO to solve a relaxed version of combinatorial optimization problems is investigated. An application of the smart SPO approach, denoted "semi SPO", is proposed by [31] for efficient inspection of ships at ports in maritime transportation.…”
Section: Relevant Literature (mentioning)
confidence: 99%
“…The end-to-end model of [30] learns the constraints of a satisfiability problem by considering a differentiable SDP relaxation of the problem. A similar work [14] trains an ML model by considering a convex surrogate of the task-loss.…”
Section: Related Work (mentioning)
confidence: 99%
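The “convex surrogate of the task-loss” attributed to [14] is in the spirit of the SPO+ loss of Elmachtoub and Grigas, whose subgradient requires only one extra call to the optimization oracle at a shifted cost vector. The sketch below illustrates that subgradient for a generic minimization LP; the oracle, constraints, and data are illustrative assumptions, not code from the cited works.

```python
# A hedged sketch of a convex surrogate of the regret in the spirit of the SPO+
# loss: for a minimization problem min_{w in S} c^T w, the surrogate
#   l(c_hat, c) = max_{w in S} (c - 2 c_hat)^T w + 2 c_hat^T w*(c) - c^T w*(c)
# has subgradient 2 (w*(c) - w*(2 c_hat - c)) with respect to c_hat, so a single
# extra oracle call at the shifted costs 2 c_hat - c yields a gradient-like signal.
# The toy LP oracle below (scipy.optimize.linprog) and its data are illustrative.
import numpy as np
from scipy.optimize import linprog

A_ub = -np.ones((1, 4))       # require at least two units in total ...
b_ub = np.array([-2.0])       # ... written as -sum(x) <= -2
bounds = [(0.0, 1.0)] * 4

def argmin_lp(costs):
    """Oracle w*(costs): minimize costs @ w over the fixed feasible region."""
    res = linprog(costs, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x

def spo_plus_subgradient(c_hat, c_true):
    """Subgradient of the surrogate loss with respect to the predicted costs."""
    return 2.0 * (argmin_lp(c_true) - argmin_lp(2.0 * c_hat - c_true))

c_true = np.array([1.0, 2.0, 3.0, 4.0])
c_hat = np.array([4.0, 3.0, 2.0, 1.0])      # a poor prediction
print(spo_plus_subgradient(c_hat, c_true))  # descent on this pushes c_hat toward c_true
```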