Abstract. JuMP is an open-source modeling language that allows users to express a wide range of optimization problems (linear, mixed-integer, quadratic, conic-quadratic, semidefinite, and nonlinear) in a high-level, algebraic syntax. JuMP takes advantage of advanced features of the Julia programming language to offer unique functionality while achieving performance on par with commercial modeling tools for standard tasks. In this work we will provide benchmarks, present the novel aspects of the implementation, and discuss how JuMP can be extended to new problem classes and composed with state-of-the-art tools for visualization and interactivity.
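To illustrate the high-level algebraic syntax the abstract refers to, here is a minimal sketch of a linear program in JuMP, written in Julia (JuMP's host language) using current JuMP 1.x syntax, which postdates this paper; the toy model and the choice of the HiGHS solver are illustrative assumptions, not taken from the paper.

    using JuMP, HiGHS  # HiGHS is an assumed open-source LP/MIP solver choice

    model = Model(HiGHS.Optimizer)
    @variable(model, x >= 0)            # decision variables with bounds
    @variable(model, 0 <= y <= 3)
    @objective(model, Min, 12x + 20y)   # linear objective
    @constraint(model, 6x + 8y >= 100)  # constraints written algebraically
    @constraint(model, 7x + 12y >= 120)
    optimize!(model)
    println(value(x), " ", value(y))    # inspect the optimal solution

The @variable, @objective, and @constraint macros rewrite these algebraic expressions at parse time, which is one of the advanced Julia features the abstract alludes to.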
[Fig. 1: The convex relaxation for a ReLU neuron using: (Left) existing MIP formulations, and (Right) the formulations presented in this paper. Axes: x1, x2, y.]

Abstract. We present an ideal mixed-integer programming (MIP) formulation for a rectified linear unit (ReLU) appearing in a trained neural network. Our formulation requires a single binary variable and no additional continuous variables beyond the input and output variables of the ReLU. We contrast it with an ideal "extended" formulation with a linear number of additional continuous variables, derived through standard techniques. An apparent drawback of our formulation is that it requires an exponential number of inequality constraints, but we provide a routine to separate the inequalities in linear time. We also prove that these exponentially many constraints are facet-defining under mild conditions. Finally, we study network verification problems and observe that dynamically separating over the exponentially many inequalities 1) is much more computationally efficient and scalable than the extended formulation, 2) decreases the solve time of a state-of-the-art MIP solver by a factor of 7 on smaller instances, and 3) nearly matches the dual bounds of a state-of-the-art MIP solver on harder instances, after just a few rounds of separation and in orders of magnitude less time.
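For contrast with the ideal formulations studied in the paper, the standard big-M MIP formulation of a single ReLU y = max(0, w^T x + b) is sketched below; here L <= w^T x + b <= U are precomputed bounds on the pre-activation over the input domain. This is textbook background, not the paper's formulation.

    y \ge w^\top x + b, \qquad
    y \le w^\top x + b - L(1 - z), \qquad
    y \le U z, \qquad
    y \ge 0, \qquad
    z \in \{0, 1\}

Setting z = 1 forces y = w^T x + b, while z = 0 forces y = 0; the paper's contribution is a formulation of the same size in variables whose LP relaxation is as tight as possible, which this big-M formulation generally is not.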
We present an auto-tuning system for optimizing I/O performance of HDF5 applications and demonstrate its value across platforms, applications, and at scale. The system uses a genetic algorithm to search a large space of tunable parameters and to identify effective settings at all layers of the parallel I/O stack. The parameter settings are applied transparently by the auto-tuning system via dynamically intercepted HDF5 calls. To validate our auto-tuning system, we applied it to three I/O benchmarks (VPIC, VORPAL, and GCRM) that replicate the I/O activity of their respective applications. We tested the system with different weak-scaling configurations (128, 2048, and 4096 CPU cores) that generate 30 GB to 1 TB of data, and executed these configurations on diverse HPC platforms (Cray XE6, IBM BG/P, and Dell Cluster). In all cases, the auto-tuning framework identified tunable parameters that substantially improved write performance over default system settings. We consistently demonstrate I/O write speedups between 2x and 100x for test configurations.
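A minimal sketch of a genetic search over I/O tuning parameters follows, in Julia to match the earlier example; the parameter names, value ranges, and surrogate fitness function are illustrative assumptions, since the real system evaluates a configuration by running the benchmark with intercepted HDF5 calls and measuring write bandwidth.

    # Candidate values for a few parallel-I/O tuning knobs (assumed names/ranges).
    candidates = Dict(
        :stripe_count => [4, 8, 16, 32],
        :stripe_size  => [1, 4, 16, 64],   # MiB
        :cb_nodes     => [1, 2, 4, 8],
        :chunk_size   => [16, 64, 256],    # MiB
    )

    rand_config() = Dict(k => rand(v) for (k, v) in candidates)

    # Toy surrogate: the real system runs the I/O benchmark with these
    # settings and returns the measured write bandwidth (GB/s).
    fitness(cfg) = -abs(cfg[:stripe_size] - 16) - abs(cfg[:chunk_size] - 64)

    crossover(a, b) = Dict(k => rand(Bool) ? a[k] : b[k] for k in keys(a))

    function mutate!(cfg, rate = 0.1)
        for (k, vals) in candidates
            rand() < rate && (cfg[k] = rand(vals))
        end
        return cfg
    end

    function evolve(; generations = 10, popsize = 12)
        pop = [rand_config() for _ in 1:popsize]
        for _ in 1:generations
            elite = sort(pop; by = fitness, rev = true)[1:popsize ÷ 2]
            children = [mutate!(crossover(rand(elite), rand(elite)))
                        for _ in 1:(popsize - length(elite))]
            pop = vcat(elite, children)
        end
        return argmax(fitness, pop)   # best configuration found
    end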
An important problem in optimization is the construction of mixed-integer programming (MIP) formulations of disjunctive constraints that are both strong and small. Motivated by lower bounds on the number of integer variables that are required by traditional MIP formulations, we present a more general mixed-integer branching formulation framework. Our approach maintains favorable algorithmic properties of traditional MIP formulations: in particular, amenability to branch-and-bound and branch-and-cut algorithms. Our main technical result gives an explicit linear inequality description for both traditional MIP and mixed-integer branching formulations for a wide range of disjunctive constraints. The formulations obtained from this description have linear programming relaxations that are as strong as possible and generalize some of the most computationally effective formulations for piecewise linear functions and other disjunctive constraints. We use this result to produce a strong mixed-integer branching formulation for any disjunctive constraint that uses only two integer variables and a linear number of extra constraints. We sharpen this result for univariate piecewise linear functions and annulus constraints arising in power systems and robotics, producing strong mixed-integer branching formulations that use only two integer variables and a constant (≤ 6) number of general inequality constraints. Along the way, we produce two strong logarithmic-sized traditional MIP formulations for the annulus constraint using our main technical result, illustrating its broader utility in the traditional MIP setting.
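As background on what a "traditional" MIP formulation of a disjunctive constraint looks like, the classical extended formulation of Balas for x in P_1 ∪ ... ∪ P_d, with each P_i = {x : A^i x ≤ b^i} a bounded polyhedron, is sketched below; this is the standard technique the paper's framework generalizes, not the new mixed-integer branching construction itself.

    x = \sum_{i=1}^{d} x^i, \qquad
    A^i x^i \le b^i z_i \quad (i = 1, \dots, d), \qquad
    \sum_{i=1}^{d} z_i = 1, \qquad
    z \in \{0, 1\}^d

Its LP relaxation is as strong as possible (it projects onto the convex hull of the disjunction), but it needs d binary variables and a copy of x for each alternative; that size/strength trade-off is exactly what motivates formulations with only two integer variables, such as those above for the annulus constraint {x in R^2 : r ≤ ||x||_2 ≤ R}.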