2015
DOI: 10.1007/s10601-015-9234-6

A Lagrangian propagator for artificial neural networks in constraint programming

Abstract: This paper discusses a new method to perform propagation over a (two-layer, feed-forward) Neural Network embedded in a Constraint Programming model. The method is meant to be employed in Empirical Model Learning, a technique designed to enable optimal decision making over systems that cannot be modeled via conventional declarative means. The key step in Empirical Model Learning is to embed a Machine Learning model into a combinatorial model. It has been shown that Neural Networks can be embedded in a Constrai…
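The abstract describes propagating variable bounds through a feed-forward network that is embedded as a constraint. As a rough illustration of what such propagation involves, here is a plain interval-arithmetic sketch over a two-layer network; this is not the paper's Lagrangian propagator, and the weight matrices, layer sizes, and tanh activation are assumptions chosen only for the example.

```python
# Minimal sketch (not the paper's Lagrangian method): interval bound
# propagation through a two-layer feed-forward network, the kind of
# reasoning a CP propagator performs when the network is embedded as
# a constraint. Weights, sizes, and the tanh activation are illustrative.
import numpy as np

def affine_bounds(W, b, lo, hi):
    """Bounds of W @ x + b when each x_i lies in [lo_i, hi_i]."""
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return out_lo, out_hi

def forward_bounds(W1, b1, W2, b2, x_lo, x_hi):
    """Propagate input-variable domains to output bounds.
    tanh is monotone, so applying it to interval endpoints is sound."""
    h_lo, h_hi = affine_bounds(W1, b1, x_lo, x_hi)
    h_lo, h_hi = np.tanh(h_lo), np.tanh(h_hi)
    return affine_bounds(W2, b2, h_lo, h_hi)

# Toy network: 2 inputs, 3 hidden neurons, 1 output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)

# Input domains as they might appear in a CP model.
x_lo, x_hi = np.array([-1.0, 0.0]), np.array([1.0, 2.0])
y_lo, y_hi = forward_bounds(W1, b1, W2, b2, x_lo, x_hi)
print("output bounds:", y_lo, y_hi)
```

Such interval bounds are typically looser than what a Lagrangian relaxation can achieve, which is precisely the gap the paper's propagator targets.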

Cited by 10 publications (3 citation statements). References 33 publications.

Citation statements (ordered by relevance):
“…Beyond MIP and convex relaxations, a number of authors have investigated other algorithmic techniques for modeling trained neural networks in optimization problems, drawing primarily from the satisfiability, constraint programming, and global optimization communities [7,8,33,37,45]. Another intriguing direction studies restrictions to the space of models that may make the optimization problem over the network inputs simpler: for example, the classes of binarized [34] or input convex [1] neural networks.…”
Section: Relevant Prior Work (mentioning)
confidence: 99%
“…In this paper, we have focused on the important problem of improving the efficiency of B&B solvers for optimal planning with learned NN transition models in continuous action and state spaces. In parallel to this work, planning and decision making in discrete action and state spaces [12,17,16], verification of learned NNs [9,6,7,14], robustness evaluation of learned NNs [20], and defenses against adversarial attacks on learned NNs [10] have been studied, with a focus on solving very similar decision-making problems. For example, the verification problem solved by Reluplex [9] is very similar to the planning problem solved by HD-MILP-Plan [18] without the objective function and with horizon H = 1.…”
Section: Related Work (mentioning)
confidence: 99%
“…Beyond MIP and convex relaxations, a number of authors have investigated other algorithmic techniques for modeling trained neural networks in optimization problems, drawing primarily from the satisfiability, constraint programming, and global optimization communities [8,9,41,48,59]. Another intriguing direction studies restrictions to the space of models that may make the optimization problem over the network inputs simpler: for example, the classes of binarized [42] or input convex [2] neural networks.…”
Section: Relevant Prior Work (mentioning)
confidence: 99%