Benders decomposition uses a strategy of "learning from one's mistakes." The aim of this paper is to extend this strategy to a much larger class of problems. The key is to generalize the linear programming dual used in the classical method to an "inference dual." Solution of the inference dual takes the form of a logical deduction that yields Benders cuts. The dual is therefore very different from other generalized duals that have been proposed. The approach is illustrated by working out the details for propositional satisfiability and 0-1 programming problems. Computational tests are carried out for the latter, but the most promising contribution of logic-based Benders may be to provide a framework for combining optimization and constraint programming methods.
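The "learning from one's mistakes" loop can be sketched in miniature. Everything below is invented for illustration and assumes only the generic master/subproblem structure the abstract describes: a tiny 0-1 master solved by enumeration, a stand-in feasibility subproblem, and the weakest possible Benders cut (a nogood excluding one failed assignment). A real implementation would use MILP or SAT solvers and derive stronger cuts from the inference dual.

```python
# Minimal illustrative sketch of a logic-based Benders loop.
# Instance, costs, and helper names are invented, not from the paper.
from itertools import product

COST = [3, 5, 4]  # cost of setting each 0-1 variable to 1

def solve_master(cuts):
    """Enumerate 0-1 assignments; return the cheapest one satisfying all cuts."""
    best = None
    for x in product([0, 1], repeat=len(COST)):
        if all(cut(x) for cut in cuts):
            c = sum(ci * xi for ci, xi in zip(COST, x))
            if best is None or c < best[1]:
                best = (x, c)
    return best

def subproblem_feasible(x):
    """Stand-in subproblem: require at least two variables set to 1."""
    return sum(x) >= 2

def nogood_cut(x_bad):
    """Benders cut obtained by 'inference': exclude the failed assignment."""
    return lambda x: x != x_bad

def benders_solve():
    cuts = []
    while True:
        x, cost = solve_master(cuts)
        if subproblem_feasible(x):
            return x, cost          # master solution passes the check: optimal
        cuts.append(nogood_cut(x))  # learn from the mistake

print(benders_solve())
```

The loop terminates because each cut removes at least one assignment from a finite master space; the point of the paper's inference dual is precisely to derive cuts that exclude far more than a single assignment.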
The competitive nature of most algorithmic experimentation is a source of problems that are all too familiar to the research community. It is hard to make fair comparisons between algorithms and to assemble realistic test problems. Competitive testing tells us which algorithm is faster but not why. Because it requires polished code, it consumes time and energy that could be better spent doing more experiments. This article argues that a more scientific approach of controlled experimentation, similar to that used in other empirical sciences, avoids or alleviates these problems. We have confused research and development; competitive testing is suited only for the latter.

Key Words: computational testing, benchmark problems

Most experimental studies of heuristic algorithms resemble track meets more than scientific endeavors. Typically an investigator has a bright idea for a new algorithm and wants to show that it works better, in some sense, than known algorithms. This requires computational tests, perhaps on a standard set of benchmark problems. If the new algorithm wins, the work is submitted for publication. Otherwise it is written off as a failure. In short, the whole affair is organized around an algorithmic race whose outcome determines the fame and fate of the contestants.

This modus operandi spawns a host of evils that have become depressingly familiar to the algorithmic research community. They are so many and pervasive that even a brief summary requires an entire section of this article. Two, however, are particularly insidious. The emphasis on competition is fundamentally anti-intellectual and does not build the sort of insight that in the long run is conducive to more effective algorithms. It tells us which algorithms are better but not why. The understanding we do accrue generally derives from initial tinkering that takes place in the design stages of the algorithm.
Because only the results of the formal competition are exposed to the light of publication, the observations that are richest in information are too often conducted in an informal, uncontrolled manner. Second, competition diverts time and resources from productive investigation. Countless hours are spent crafting the fastest possible code and finding the best possible parameter settings in order to obtain results that are suitable for publication. This is particularly unfortunate because it squanders a natural advantage of empirical algorithmic work. Most empirical work in other sciences tends to be slow and expensive, requiring well-appointed laboratories, massive equipment, or carefully selected subjects. By contrast, much empirical work on algorithms can be carried out on a workstation by a single investigator. This advantage should be exploited by conducting more experiments, rather than by implementing each one in the fastest possible code.
We combine mixed-integer linear programming (MILP) and constraint programming (CP) to solve an important class of planning and scheduling problems. Tasks are allocated to facilities using MILP and scheduled using CP, and the two are linked via logic-based Benders decomposition. Tasks assigned to a facility may run in parallel subject to resource constraints (cumulative scheduling). We solve problems in which the objective is to minimize cost, makespan, or total tardiness. We obtain significant computational speedups, of several orders of magnitude for the first two objectives, relative to the state of the art in both MILP and CP. We also obtain better solutions and bounds for problems that cannot be solved to optimality.
Abstract. The typical constraint store transmits a limited amount of information because it consists only of variable domains. We propose a richer constraint store in the form of a limited-width multivalued decision diagram (MDD). It reduces to a traditional domain store when the maximum width is one but allows greater pruning of the search tree for larger widths. MDD propagation algorithms can be developed to exploit the structure of particular constraints, much as is done for domain filtering algorithms. We propose specialized propagation algorithms for alldiff and inequality constraints. Preliminary experiments show that MDD propagation solves multiple alldiff problems an order of magnitude more rapidly than traditional domain propagation. It also significantly reduces the search tree for inequality problems, but additional research is needed to reduce the computation time.
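The gap between a traditional domain store and an MDD store can be seen on a toy example. The constraint, domains, and variable names below are invented for illustration (they are not from the paper): for x + y = 2 over domains {0, 1, 2}, per-variable domains admit nine combinations, while an exact MDD represents the three solutions precisely, at the price of a width greater than one.

```python
# Hedged sketch: domain store (width-1 MDD) vs. an exact MDD for the
# toy constraint x + y == 2 over domains {0, 1, 2}. Names are invented.
from itertools import product

DOMAIN = [0, 1, 2]
solutions = [(x, y) for x, y in product(DOMAIN, DOMAIN) if x + y == 2]

# Domain store: per-variable projections lose the coupling between x and y.
dom_x = sorted({x for x, _ in solutions})
dom_y = sorted({y for _, y in solutions})
relaxation_size = len(dom_x) * len(dom_y)  # 9 combinations admitted

# MDD store: group each value of x by the set of y-values that remain
# feasible after choosing it; each distinct set becomes one node in the
# next layer of the MDD.
layer = {}
for x, y in solutions:
    layer.setdefault(x, set()).add(y)
width = len({frozenset(ys) for ys in layer.values()})

print(relaxation_size, len(solutions), width)
```

Here the exact MDD needs width 3, so a width-1 store (the traditional domain store) cannot capture the constraint exactly; a limited-width MDD interpolates between these two extremes.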
Second Edition

All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights. Printed on acid-free paper. Springer is part of Springer Science+Business Media (www.springer.com).

ISSN 0884-8289, ISBN 978-1-4614-1899-3, e-ISBN 978-1-4614-1900-6, DOI 10.1007/978-1-4614-1900

Preface

Optimization has become a versatile tool in a wide array of application areas, ranging from manufacturing and information technology to the social sciences. Methods for solving optimization problems are equally numerous and provide a large reservoir of problem-solving technology. In fact, there is such a variety of methods that it is difficult to take full advantage of them. They are described in different technical languages and are implemented in different software packages. Many are not implemented at all. It is hard to tell which one is best for a given problem, and there is too seldom an opportunity to combine techniques that have complementary strengths.

The ideal would be to bring these methods under one roof, so that they and their combinations are all available to solve a problem. As it turns out, many of them share, at some level, a common problem-solving strategy. This opens the door to integration: to the design of a modeling and algorithmic framework within which different techniques can work together in a principled way. This book undertakes such a project.
It deals primarily with the unification of mathematical programming and constraint programming, since this has been the focus of most recent research on integrated methods. Mathematical programming brings to the table its sophisticated relaxation techniques and concepts of duality. Constraint programming contributes its inference and propagation methods, along with a powerful modeling approach. It is possible to have all of these advantages at once, rather than being forced to choose between them. Continuous global optimization and heuristic methods can also be brought into the framework.

The book is intended for those who wish to learn about optimization from an integrated point of view, including researchers, software developers, and practitioners. It is also for postgraduate students interested in a unified treatment of the field. It is written as an advanced textbook, with exercises, that develops optimization concepts from the ground up. It takes an interdisciplinary approach that presupposes mathematical sophistication but no specific knowledge of either mathematical programming or constraint programming. The choice of top...
Revised 18 November 2011

Abstract. We discuss the problem of combining the conflicting objectives of equity and utilitarianism, for social policy making, in a single mathematical programming model. The definition of equity we use is the Rawlsian one of maximising the minimum utility over individuals or classes of individuals. However, when the disparity of utility becomes too great, the objective becomes progressively utilitarian. Such a model is applicable not only to health provision but to other areas as well. Building a mixed integer/linear programming (MILP) formulation of the problem raises technical issues, as the objective function is nonconvex and the hypograph is not MILP representable in its initial form. We present a succinct formulation and show that it is "sharp" in the sense that its linear programming relaxation describes the convex hull of the feasible set (before extra resource allocation or policy constraints are added). We apply the formulation to a health care planning problem and show that instances of realistic size are easily solved by standard MILP software.
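The idea of an objective that is maximin-driven at low disparity and progressively utilitarian at high disparity can be given a rough shape. The function and the numbers below are invented for illustration and are emphatically not the paper's sharp MILP formulation: they assume a threshold delta so that each person's utility counts in equity terms up to delta above the minimum, and only the excess beyond that counts in utilitarian terms.

```python
# Illustration only: a hypothetical blended social welfare function with a
# disparity threshold delta. NOT the formulation in the paper.
def social_welfare(utilities, delta):
    u_min = min(utilities)
    n = len(utilities)
    # Equity part: everyone counts as if at the minimum (Rawlsian maximin).
    # Utilitarian part: utility more than delta above the minimum adds on.
    return n * u_min + sum(max(u - u_min - delta, 0) for u in utilities)

print(social_welfare([4, 5, 6], delta=3))   # low disparity: maximin-driven
print(social_welfare([4, 5, 20], delta=3))  # high disparity: utilitarian term kicks in
```

In the low-disparity case the value is determined entirely by the minimum utility; in the high-disparity case the outlier's excess contributes, which is the "progressively utilitarian" behaviour the abstract describes.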
Abstract. Fixed-width MDDs were introduced recently as a more refined alternative to the domain store for representing partial solutions to CSPs. In this work, we present a systematic approach to MDD-based constraint programming. First, we introduce a generic scheme for constraint propagation in MDDs. We show that all previously known propagation algorithms for MDDs can be expressed using this scheme. Moreover, we use the scheme to produce algorithms for a number of other constraints, including Among, Element, and unary resource constraints. Finally, we discuss an implementation of our MDD-based CP solver, and provide experimental evidence of the benefits of MDD-based constraint programming.