Mathematical programming models with noisy, erroneous, or incomplete data are common in operations research applications. Difficulties with such data are typically dealt with reactively, through sensitivity analysis, or proactively, through stochastic programming formulations. In this paper, we characterize the desirable properties of a solution to models when the problem data are described by a set of scenarios for their values, instead of being fixed at point estimates. A solution to an optimization model is defined as: solution robust if it remains “close” to optimal for all scenarios of the input data, and model robust if it remains “almost” feasible for all data scenarios. We then develop a general model formulation, called robust optimization (RO), that explicitly incorporates the conflicting objectives of solution and model robustness. Robust optimization is compared with the traditional approaches of sensitivity analysis and stochastic linear programming. The classical diet problem illustrates the issues. Robust optimization models are then developed for several real-world applications: power capacity expansion; matrix balancing and image reconstruction; air-force airline scheduling; scenario immunization for financial planning; and minimum-weight structural design. We also comment on the suitability of parallel and distributed computer architectures for the solution of robust optimization models.
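The tension between solution robustness (staying near-optimal) and model robustness (staying near-feasible) can be illustrated with a toy scenario-based evaluation. The sketch below is not from the paper: the demand scenarios, probabilities, cost, and penalty weight `omega` are invented for illustration, and the objective shown is just the generic "expected cost plus weighted expected infeasibility" pattern.

```python
# Toy illustration of solution vs. model robustness (invented data).
# A single production level x must meet an uncertain demand d_s;
# production costs c per unit, and unmet demand max(d_s - x, 0)
# measures infeasibility under scenario s.

scenarios = [80.0, 100.0, 120.0]   # hypothetical demand scenarios
probs = [0.3, 0.4, 0.3]            # hypothetical scenario probabilities
c = 1.0                            # unit production cost (assumed)
omega = 5.0                        # weight trading cost against feasibility

def ro_objective(x):
    # Solution-robustness term: expected cost across scenarios.
    expected_cost = sum(p * c * x for p in probs)
    # Model-robustness term: expected constraint violation, penalized.
    expected_violation = sum(p * max(d - x, 0.0)
                             for p, d in zip(probs, scenarios))
    return expected_cost + omega * expected_violation

# A larger x is feasible in more scenarios but costs more;
# omega controls which effect dominates.
for x in (80.0, 100.0, 120.0):
    print(x, ro_objective(x))
```

Raising `omega` pushes the minimizer toward decisions that are feasible in every scenario; lowering it tolerates some infeasibility in exchange for lower expected cost, which is exactly the trade-off the RO formulation makes explicit.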
We propose a new interior point based method to minimize a linear function of a matrix variable subject to linear equality and inequality constraints over the set of positive semidefinite matrices. We show that the approach is very efficient for graph bisection problems, such as max-cut. Other applications include max-min eigenvalue problems and relaxations for the stable set problem.
ABSTRACT. This paper describes a software package, called LOQO, which implements a primal-dual interior-point method for general nonlinear programming. We focus in this paper mainly on the algorithm as it applies to linear and quadratic programming, with only brief mention of the extensions to convex and general nonlinear programming, since a detailed paper describing these extensions was published recently elsewhere. In particular, we emphasize the importance of establishing and maintaining symmetric quasidefiniteness of the reduced KKT system. We show that problems in the industry-standard MPS format can be formulated in such a way as to provide quasidefiniteness. Computational results are included for a variety of linear and quadratic programming problems.
We say that a symmetric matrix K is quasi-definite if it has the form

    K = [ -E  Aᵀ ]
        [  A   F ],

where E and F are symmetric positive definite matrices. Although such matrices are indefinite, we show that any symmetric permutation of a quasi-definite matrix yields a factorization LDLᵀ. We apply this result to obtain a new approach for solving the symmetric indefinite systems arising in interior-point methods for linear and quadratic programming. These systems are typically solved either by reducing to a positive definite system or by performing a Bunch-Parlett factorization of the full indefinite system at every iteration. Ours is an intermediate approach based on reducing to a quasi-definite system. This approach entails less fill-in than further reducing to a positive definite system but is based on a static ordering and is therefore more efficient than performing Bunch-Parlett factorizations of the original indefinite system.
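The factorization result can be checked numerically on a small example. The sketch below uses the textbook LDLᵀ recurrence without pivoting (standing in for the solver's actual factorization code) on an illustrative 2×2 quasi-definite matrix with 1×1 blocks; the claim being exercised is that both the matrix and its symmetric permutation factor without hitting a zero pivot.

```python
# Sketch: LDL^T factorization without pivoting, applied to a small
# quasi-definite matrix K = [[-E, A^T], [A, F]] with illustrative
# 1x1 blocks E = [2], F = [3], A = [1]. Quasi-definiteness guarantees
# that every symmetric permutation of K factors with nonzero pivots.

def ldl(K):
    """Return (L, d) with K = L * diag(d) * L^T, L unit lower triangular."""
    n = len(K)
    L = [[0.0] * n for _ in range(n)]
    d = [0.0] * n
    for j in range(n):
        # Pivot: a zero here would abort the factorization, which the
        # quasi-definiteness theorem rules out.
        d[j] = K[j][j] - sum(L[j][k] ** 2 * d[k] for k in range(j))
        L[j][j] = 1.0
        for i in range(j + 1, n):
            L[i][j] = (K[i][j]
                       - sum(L[i][k] * L[j][k] * d[k] for k in range(j))) / d[j]
    return L, d

K = [[-2.0, 1.0],
     [ 1.0, 3.0]]
L, d = ldl(K)

# Reconstruct K from the factors to verify K = L * diag(d) * L^T.
n = len(K)
R = [[sum(L[i][k] * d[k] * L[j][k] for k in range(n)) for j in range(n)]
     for i in range(n)]

# The reverse symmetric permutation of K factors as well, with the
# signs of the pivots swapped (K is indefinite either way).
P = [[3.0, 1.0],
     [1.0, -2.0]]
L2, d2 = ldl(P)
```

Note the pivots: one negative and one positive in both orderings, reflecting that quasi-definite matrices are indefinite yet still admit a pivot-free factorization under any static symmetric ordering.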
Abstract. We present a modification of Karmarkar's linear programming algorithm. Our algorithm uses a recentered projected gradient approach, thereby obviating a priori knowledge of the optimal objective function value. Assuming primal and dual nondegeneracy, we prove that our algorithm converges. We present computational comparisons between our algorithm and the revised simplex method. For small, dense constraint matrices we saw little difference between the two methods.

Key Words. Linear programming, Karmarkar's algorithm, Projected gradient methods, Least squares.

1. Introduction. This paper proposes a modification to Karmarkar's original algorithm [6] for solving linear programs. Our algorithm is formulated in the positive orthant instead of the simplex. This makes it easier to conceptualize and leads to computational simplicity. Karmarkar's sliding-objective-function method is replaced by a projected gradient search for the optimum. Empirically, this leads to a decrease in the number of iterations the algorithm requires to solve a problem.

In describing our algorithm, we show how to start it, when to stop it, and how to identify infeasibility and unboundedness easily. Assuming primal and dual nondegeneracy, we prove convergence to the optimal solution. (In practice, the algorithm works equally well on problems not satisfying these assumptions.) We also show that duality plays an important role.

Finally, we present results comparing the performance of our algorithm with the revised simplex method for problems with fewer than 200 variables and randomly generated, dense constraint matrices. For this class of problems we saw little difference between the two methods. However, our results may not be indicative of the behavior for large sparse problems.