An Investigation into Prediction + Optimisation for the Knapsack Problem

2019
DOI: 10.1007/978-3-030-19212-9_16

Abstract: We study a prediction+optimisation formulation of the knapsack problem. The goal is to predict the profits of knapsack items based on historical data, and afterwards use these predictions to solve the knapsack. The key is that the item profits are not known beforehand and thus must be estimated, but the quality of the solution is evaluated with respect to the true profits. We formalise the problem, the goal of minimising expected regret and the learning problem, and investigate different machine learning appro…
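
The setting described in the abstract can be made concrete with a small sketch. This is an illustrative reading of the prediction+optimisation pipeline, not the authors' code; the function names and toy data are assumptions. Predicted profits drive the knapsack solver, but the resulting solution is scored against the true profits, and regret is the gap to the true optimum.

```python
# Illustrative sketch of prediction+optimisation regret for the 0/1 knapsack.
# Names and data are assumptions for this example, not the paper's implementation.

from itertools import combinations


def solve_knapsack(profits, weights, capacity):
    """Brute-force 0/1 knapsack; returns the chosen index set.

    Exponential and for illustration only -- a DP or MIP solver
    would be used in practice.
    """
    n = len(profits)
    best_value, best_items = 0, ()
    for r in range(n + 1):
        for items in combinations(range(n), r):
            if sum(weights[i] for i in items) <= capacity:
                value = sum(profits[i] for i in items)
                if value > best_value:
                    best_value, best_items = value, items
    return set(best_items)


def regret(predicted_profits, true_profits, weights, capacity):
    """True-profit gap between the optimal solution and the solution
    obtained by optimising over the predictions."""
    chosen = solve_knapsack(predicted_profits, weights, capacity)
    optimal = solve_knapsack(true_profits, weights, capacity)
    achieved = sum(true_profits[i] for i in chosen)
    best = sum(true_profits[i] for i in optimal)
    return best - achieved


# Toy instance: the predictions rank items 1 and 2 above item 0,
# so optimising over them forfeits the truly most profitable item.
true_p = [10, 4, 4]
pred_p = [3, 6, 6]          # e.g. output of a regression model
weights = [5, 4, 3]
capacity = 7
print(regret(pred_p, true_p, weights, capacity))  # 2
```

With the toy data above, the solver picks items 1 and 2 (predicted value 12) instead of item 0, so the solution is worth 8 under the true profits against an optimum of 10, giving a regret of 2.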

Cited by 21 publications (28 citation statements).
References 21 publications (33 reference statements).
“…It is important to note that, similar to the approach proposed by Selsam and Bjørner [23] - and with works in the predict-and-optimise paradigm [6,8] - our ambition is not to achieve the best possible ML predictions. The reason for this is that more accurate predictions do not necessarily imply that they are more useful for the solver; rather, the metric to optimise is the runtime of the solver.…”
Section: Approach (mentioning)
confidence: 99%
“…As a consequence, the ML models do not account for the optimization tasks (Wang et al. 2006; Mukhopadhyay et al. 2017). In recent years there is a growing interest in decision-focused learning (Elmachtoub and Grigas 2017; Demirović et al. 2019), that aims to couple ML and decision making.…”
Section: Related Work (mentioning)
confidence: 99%
“…Demirović et al. (2019) investigate the prediction+optimisation problem for the knapsack problem, and prove that optimising over predictions is as valid as stochastic optimisation over learned distributions when the predictions are used as weights in a linear objective. They further investigate possible learning approaches and classify them into three groups: indirect approaches, which do not use knowledge of the optimisation problem; semi-direct approaches, which encode knowledge of the optimisation problem, such as the importance of ranking; and direct approaches, which encode or use the optimisation problem in the learning in some way (Demirović et al. 2019). Our approach is a direct approach, and we examine how to combine the best of such techniques in order to scale to large and hard combinatorial problems.…”
Section: Related Work (mentioning)
confidence: 99%
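
To make the classification in that statement concrete, here is a hypothetical sketch (the labels and function names are mine, not from the cited papers): an indirect loss scores predictions purely by prediction error, while a semi-direct loss encodes a property of the downstream knapsack, namely that the ranking of predicted profits determines which items a solver favours.

```python
# Hypothetical contrast between an "indirect" and a "semi-direct" learning signal.
# Function names and data are assumptions for illustration only.

def indirect_loss(pred, true):
    # Plain regression error: treats profit prediction as an isolated task
    # and uses no knowledge of the optimisation problem.
    return sum(abs(p - t) for p, t in zip(pred, true))


def semi_direct_ranking_loss(pred, true):
    # Count item pairs whose predicted order disagrees with the true order;
    # only the ranking matters, not the predicted magnitudes.
    n = len(pred)
    return sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if (pred[i] - pred[j]) * (true[i] - true[j]) < 0
    )


print(indirect_loss([4.0, 9.0, 8.0], [10.0, 7.0, 6.0]))             # 10.0
print(semi_direct_ranking_loss([4.0, 9.0, 8.0], [10.0, 7.0, 6.0]))  # 2 discordant pairs
```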
“…A common issue in data-driven optimization is that using customary ML error metrics may not lead to good solutions of the optimization problem (see, for example, [14,17]). We tackled this issue by comparing the classical Mean Absolute Error, $\mathrm{MAE}_S = \sum_{i \in S} |p_i - \hat{p}_i|$, where $p_i = p_i(f, c)$ and $\hat{p}_i = \hat{p}_i(f, c)$, to the custom metric $\mathrm{cMAE}_S(\delta) = \sum_{i \in S} \mathrm{loss}_i$, where…”
Section: Pmlp Experimental Setup (mentioning)
confidence: 99%
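
The quoted definitions can be read as in the sketch below. The per-item term of the custom metric is cut off in the excerpt, so it is left as a caller-supplied function; this is an illustrative reading, not the cited paper's implementation.

```python
# Illustrative reading of the two metrics quoted above. Names are assumptions.

def mae(true_profits, pred_profits):
    # MAE_S = sum over i in S of |p_i - p_hat_i| (summed, not averaged).
    return sum(abs(p - q) for p, q in zip(true_profits, pred_profits))


def custom_mae(true_profits, pred_profits, per_item_loss):
    # cMAE_S(delta) = sum over i in S of loss_i; the definition of loss_i
    # is truncated in the quoted text, so it is supplied by the caller here.
    return sum(per_item_loss(p, q) for p, q in zip(true_profits, pred_profits))


print(mae([10.0, 7.0, 6.0], [4.0, 9.0, 8.0]))  # 10.0
```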