2018
DOI: 10.48550/arxiv.1811.01988
Preprint

Strong mixed-integer programming formulations for trained neural networks

Abstract: We present strong mixed-integer programming (MIP) formulations for high-dimensional piecewise linear functions that correspond to trained neural networks. These formulations can be used for a number of important tasks, such as verifying that an image classification network is robust to adversarial inputs, or solving decision problems where the objective function is a machine learning model. We present a generic framework, which may be of independent interest, that provides a way to construct sharp or ideal for…
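For context, a minimal illustration of the baseline this paper strengthens: the standard big-M MIP encoding of a single ReLU unit y = max(0, wᵀx + b). The notation below is chosen here for illustration and assumes known finite bounds L ≤ wᵀx + b ≤ U with L < 0 < U; the paper's contribution is formulations provably tighter than this one.

```latex
% Standard big-M encoding of y = max(0, w^T x + b),
% assuming finite bounds L <= w^T x + b <= U with L < 0 < U.
\begin{align*}
  y &\ge w^\top x + b \\
  y &\le w^\top x + b - L(1 - z) \\
  y &\le U z \\
  y &\ge 0, \quad z \in \{0, 1\}
\end{align*}
```

Fixing z = 1 forces y = wᵀx + b (the active case) and fixing z = 0 forces y = 0; the looseness of this encoding's linear-programming relaxation is precisely what sharp and ideal formulations improve on.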

Cited by 5 publications (11 citation statements)
References 39 publications
“…Any method that improves on any of the above issues can possibly bypass the barrier; see, e.g., SDP-based verifiers [Raghunathan et al., 2018b] can consider the interaction between each neuron within one layer; [Anderson et al., 2018] can relax the combination of one ReLU layer and one affine layer.…”
Section: Conclusion and Discussion (mentioning, confidence: 99%)
“…We emphasize that by optimal, we mean the optimal convex relaxation of the single nonlinear constraint $x^{(l+1)} = \sigma^{(l)}(z^{(l)})$ (see Proposition B.3) instead of the optimal convex relaxation of the nonconvex feasible set of the original problem (O). As such, techniques as in [Anderson et al., 2018, Raghunathan et al., 2018b] are outside our framework; see appendix C for more discussions.…”
Section: Convex Relaxation From the Primal View (mentioning, confidence: 99%)
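For the common case where $\sigma^{(l)}$ is a ReLU applied to a scalar pre-activation $\hat{y}$ with known bounds L ≤ ŷ ≤ U and L < 0 < U, the optimal convex relaxation of the single constraint y = max(0, ŷ) referred to in the passage above is the well-known triangle relaxation; a sketch, with notation chosen here for illustration:

```latex
% Convex hull of {(yhat, y) : y = max(0, yhat), L <= yhat <= U}, with L < 0 < U.
\begin{align*}
  y &\ge 0 \\
  y &\ge \hat{y} \\
  y &\le \frac{U\,(\hat{y} - L)}{U - L}
\end{align*}
```

These three inequalities describe the tightest convex set containing the graph of the single ReLU constraint; as the quoted passage notes, techniques such as [Anderson et al., 2018] go beyond this by relaxing a ReLU layer jointly with the preceding affine layer.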
“…• Strong MILP formulations and a related family of cutting planes for ReLU networks were recently presented in [56]. These advancements could extend the applicability of ReLU networks as surrogate models by lowering solution times.…”
Section: Promising Research Directions (mentioning, confidence: 99%)
“…There are, however, proposed uses of Mixed-Integer and Linear Programming technology in other aspects of Deep Learning. Some examples of this include feature visualization [Fischetti and Jo, 2018], generating adversarial examples [Cheng et al., 2017, Khalil et al., 2018, Fischetti and Jo, 2018], counting linear regions of a Deep Neural Network [Serra et al., 2017], performing inference [Amos et al., 2017] and providing strong convex relaxations for trained neural networks [Anderson et al., 2018].…”
Section: Related Work (mentioning, confidence: 99%)
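To make the verification use case from the abstract concrete, here is a minimal, hypothetical sketch in Python using the open-source PuLP modeler. It applies the big-M encoding shown earlier to a single ReLU neuron and maximizes the worst-case activation over an ℓ∞ ball around a nominal input; the weights, radius, and bounds are invented for illustration, and real verifiers encode entire trained networks this way.

```python
import pulp

# Toy instance (all numbers invented for illustration):
# one ReLU neuron y = max(0, w*x + b), input x in an l_inf ball around x0.
w, b = 2.0, -1.0
x0, eps = 0.25, 0.75

# Pre-activation bounds over the input ball (w > 0, so bounds are monotone).
L = w * (x0 - eps) + b
U = w * (x0 + eps) + b

prob = pulp.LpProblem("relu_verification", pulp.LpMaximize)
x = pulp.LpVariable("x", x0 - eps, x0 + eps)   # perturbed input
y = pulp.LpVariable("y", 0)                    # post-activation output, y >= 0
z = pulp.LpVariable("z", cat="Binary")         # ReLU phase indicator

pre = w * x + b
prob += y >= pre                  # y >= w*x + b
prob += y <= pre - L * (1 - z)    # big-M upper bound, inactive phase
prob += y <= U * z                # big-M upper bound, active phase
prob += y                         # objective: worst-case activation

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("worst-case input:", pulp.value(x), "activation:", pulp.value(y))
```

A robustness query would typically bound a margin between output logits rather than a single activation, but the per-neuron ReLU encoding is the same.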