Abstract: We study a prediction+optimisation formulation of the knapsack problem. The goal is to predict the profits of knapsack items from historical data and then use these predictions to solve the knapsack problem. The key difficulty is that the item profits are not known beforehand and must be estimated, while the quality of the solution is evaluated with respect to the true profits. We formalise the problem, the goal of minimising expected regret, and the learning problem, and investigate different machine learning approaches…
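The regret notion from this abstract can be made concrete with a small sketch. Everything here is illustrative: the brute-force solver, the function names, and the toy instance are assumptions for exposition, not the paper's experimental setup. Regret is the true-profit value lost by optimising over predicted profits instead of the (unknown) true ones.

```python
from itertools import combinations

def solve_knapsack(profits, weights, capacity):
    """Brute-force 0/1 knapsack: return the index set with maximum profit."""
    n = len(profits)
    best, best_items = 0, ()
    for r in range(n + 1):
        for items in combinations(range(n), r):
            if sum(weights[i] for i in items) <= capacity:
                value = sum(profits[i] for i in items)
                if value > best:
                    best, best_items = value, items
    return best_items

def regret(true_profits, predicted_profits, weights, capacity):
    """Objective value lost by optimising over the predictions
    instead of the (unknown) true profits."""
    opt = solve_knapsack(true_profits, weights, capacity)
    pred = solve_knapsack(predicted_profits, weights, capacity)
    true_value = lambda items: sum(true_profits[i] for i in items)
    return true_value(opt) - true_value(pred)

# Predictions that invert the profit ranking of items cause positive regret.
print(regret([10, 4, 3], [4, 9, 3], [4, 3, 2], capacity=5))  # → 3
```

Note that predictions with large absolute error can still yield zero regret, as long as they rank the items so that the same solution is chosen; this is why the papers cited here argue that standard ML error metrics are the wrong training target.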
“…It is important to note that, similar to the approach proposed by Selsam and Bjørner [23], and in line with works in the predict-and-optimise paradigm [6,8], our ambition is not to achieve the best possible ML predictions. The reason is that more accurate predictions do not necessarily imply that they are more useful for the solver; rather, the metric to optimise is the runtime of the solver.…”
Contemporary research explores the possibilities of integrating machine learning (ML) approaches with traditional combinatorial optimisation solvers. Since optimisation hybrid solvers, which combine propositional satisfiability (SAT) and constraint programming (CP), dominate recent benchmarks, it is surprising that the literature has paid limited attention to machine learning approaches for hybrid CP-SAT solvers. We identify the technique of minimal unsatisfiable subsets as promising to improve the performance of the hybrid CP-SAT lazy clause generation solver Chuffed. We leverage a graph convolutional network (GCN) model, trained on an adapted version of the MiniZinc benchmark suite. The GCN predicts which variables belong to an unsatisfiable subset on CP instances; these predictions are used to initialise the activity score of Chuffed's Variable-State Independent Decaying Sum (VSIDS) heuristic. We benchmark the ML-aided Chuffed on the MiniZinc benchmark suite and find a robust 2.5% gain over baseline Chuffed on MRCPSP instances. This paper thus presents the first, to our knowledge, successful application of machine learning to improve hybrid CP-SAT solvers, a step towards improved automatic solving of CP models.
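Chuffed's actual VSIDS machinery lives in the solver's C++ internals; the sketch below, with hypothetical variable names and a made-up `boost` parameter, only illustrates the idea the abstract describes: seeding activity scores from the GCN's per-variable predictions so that variables likely to belong to an unsatisfiable subset are branched on first.

```python
def init_vsids_activities(predictions, boost=1.0, default=0.0):
    """Map per-variable MUS-membership probabilities to initial VSIDS
    activity scores: variables the model believes appear in an
    unsatisfiable subset start with higher activity, so the branching
    heuristic tries them first instead of starting from a cold state."""
    return {var: default + boost * p for var, p in predictions.items()}

def pick_branching_variable(activities, unassigned):
    """VSIDS branching rule: choose the unassigned variable with the
    highest current activity score."""
    return max(unassigned, key=lambda v: activities[v])

# Hypothetical GCN output: probability of each variable being in a MUS.
gcn_scores = {"x1": 0.92, "x2": 0.10, "x3": 0.55}
activities = init_vsids_activities(gcn_scores)
print(pick_branching_variable(activities, {"x1", "x2", "x3"}))  # → x1
```

In the real solver the activities then continue to be bumped and decayed during search, so the predictions only bias the early branching decisions rather than fixing the order permanently.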
“…As a consequence, the ML models do not account for the optimization tasks (Wang et al. 2006; Mukhopadhyay et al. 2017). In recent years there has been growing interest in decision-focused learning (Elmachtoub and Grigas 2017; Demirović et al. 2019), which aims to couple ML and decision making.…”
Section: Related Work
“…Demirović et al. (2019) investigate the prediction+optimisation problem for the knapsack problem, and prove that optimising over predictions is as valid as stochastic optimisation over learned distributions when the predictions are used as weights in a linear objective. They further investigate possible learning approaches and classify them into three groups: indirect approaches, which do not use knowledge of the optimisation problem; semi-direct approaches, which encode knowledge of the optimisation problem, such as the importance of ranking; and direct approaches, which encode or use the optimisation problem in the learning in some way (Demirović et al. 2019). Our approach is a direct approach, and we examine how to combine the best of such techniques in order to scale to large and hard combinatorial problems.…”
Combinatorial optimization assumes that all parameters of the optimization problem, e.g. the weights in the objective function, are fixed. Often, these weights are mere estimates, and machine learning techniques are increasingly used for their estimation. Recently, Smart Predict and Optimize (SPO) has been proposed for problems with a linear objective function over the predictions, more specifically linear programming problems. It takes the regret of the predictions on the linear problem into account by repeatedly solving that problem during learning. We investigate the use of SPO to solve more realistic discrete optimization problems. The main challenge is the repeated solving of the optimization problem. To this end, we investigate ways to relax the problem as well as warm-starting the learning and the solving. Our results show that even for discrete problems it often suffices to train by solving the relaxation in the SPO loss. Furthermore, this approach outperforms the state-of-the-art approach of Wilder, Dilkina, and Tambe. We experiment with weighted knapsack problems as well as complex scheduling problems, and show for the first time that a predict-and-optimize approach can successfully be used on large-scale combinatorial optimization problems.
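The key computational trick in this abstract, training on the relaxation instead of the discrete problem, can be sketched for the knapsack case. This is a hedged illustration only: the greedy fractional solve stands in for the LP relaxation, and the full SPO(+) loss additionally involves subgradients through the solver, which is omitted here; only the regret quantity that the loss tracks is shown.

```python
def solve_relaxed_knapsack(profits, weights, capacity):
    """LP relaxation of the 0/1 knapsack: greedy by profit/weight
    ratio, allowing one fractional item. Much cheaper than the
    discrete solve that would otherwise be repeated at every
    training step inside the SPO loss."""
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * len(profits)
    remaining = capacity
    for i in order:
        take = min(1.0, remaining / weights[i])
        x[i] = take
        remaining -= take * weights[i]
        if remaining <= 0:
            break
    return x

def spo_regret(true_profits, predicted_profits, weights, capacity):
    """Regret of the relaxed solution chosen under the predictions,
    evaluated against the true profits."""
    value = lambda x, p: sum(xi * pi for xi, pi in zip(x, p))
    x_true = solve_relaxed_knapsack(true_profits, weights, capacity)
    x_pred = solve_relaxed_knapsack(predicted_profits, weights, capacity)
    return value(x_true, true_profits) - value(x_pred, true_profits)

# Predictions that invert the profit ranking incur positive regret.
print(spo_regret([10, 4, 3], [4, 9, 3], [4, 3, 2], capacity=5))  # → 4.5
```

The abstract's empirical finding is that gradients computed on this cheap relaxed solve are often enough to train a model whose predictions also work well for the original discrete problem.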
“…A common issue in data-driven optimization is that using customary ML error metrics may not lead to good solutions of the optimization problem (see, for example, [14,17]). We tackled this issue by comparing the classical Mean Absolute Error, $\mathrm{MAE}_S = \sum_{i \in S} |p_i - \hat{p}_i|$, where $p_i = p_i(f, c)$ and $\hat{p}_i = \hat{p}_i(f, c)$, to the custom metric $\mathrm{cMAE}_S(\delta) = \sum_{i \in S} \mathrm{loss}_i$, where…”
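The two metrics compared in this snippet can be sketched in code. The definition of $\mathrm{loss}_i$ is truncated in the quoted passage, so the `cmae` body below is a purely hypothetical stand-in (a tolerance $\delta$ with asymmetric weighting of under-predictions) used only to show the shape of such a decision-aware metric; it is not the paper's actual definition.

```python
def mae(true_p, pred_p):
    """Classical Mean Absolute Error over a set S of items:
    MAE_S = sum_i |p_i - p_hat_i|."""
    return sum(abs(t - p) for t, p in zip(true_p, pred_p))

def cmae(true_p, pred_p, delta, under_weight=2.0):
    """Custom metric cMAE_S(delta) = sum_i loss_i.  HYPOTHETICAL loss_i:
    ignore errors within a tolerance delta, and weight under-predictions
    more heavily than over-predictions, reflecting that the two error
    directions can affect the downstream optimisation differently."""
    total = 0.0
    for t, p in zip(true_p, pred_p):
        err = abs(t - p)
        if err <= delta:
            continue  # small errors are assumed harmless to the solver
        total += (under_weight if p < t else 1.0) * (err - delta)
    return total
```

The point of such a comparison is that two prediction vectors with equal MAE can differ sharply under a decision-aware metric, because only some errors change which solution the optimiser selects.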