2015
DOI: 10.1007/s10766-015-0356-7
Mixing Static and Dynamic Partitioning to Parallelize a Constraint Programming Solver

Abstract: This paper presents an external parallelization of Constraint Programming (CP) search-tree exploration that mixes static and dynamic partitioning. The principle of the parallelization is to partition the CP search tree into a set of sub-trees, then assign each sub-tree to one computing core, which performs a local search using a sequential CP solver. In this context, static partitioning consists of decomposing the CP variable domains in order to split the CP search tree into a set of disjoint s…
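To make the two partitioning phases concrete, here is a minimal sketch (not the paper's actual OR-Tools-based implementation; the problem, function names, and parameters are hypothetical). A toy CSP is first decomposed statically by splitting one variable's domain into disjoint slices, producing more sub-trees than workers; idle workers then dynamically pull the remaining sub-trees from a shared queue, simulating work-stealing:

```python
from collections import deque

# Hypothetical toy CSP: find all (x, y) with x, y in 0..N-1 and x + y == N - 1.
N = 12

def solve_subtree(x_values):
    """Sequential solver run on one sub-tree, fixed by a slice of x's domain."""
    solutions = []
    for x in x_values:
        for y in range(N):
            if x + y == N - 1:
                solutions.append((x, y))
    return solutions

def static_partition(domain, n_parts):
    """Static phase: split the domain of x into disjoint slices (sub-trees)."""
    return [domain[i::n_parts] for i in range(n_parts)]

def parallel_solve(n_workers=4, split_factor=3):
    # Create more sub-trees than workers so the dynamic phase has work to steal.
    subtrees = deque(static_partition(list(range(N)), n_workers * split_factor))
    solutions = []
    # Dynamic phase (simulated sequentially here): each "idle worker" steals
    # the next unexplored sub-tree from the shared queue until it is empty.
    while subtrees:
        work = subtrees.popleft()
        solutions.extend(solve_subtree(work))
    return sorted(solutions)
```

In a real deployment each `solve_subtree` call would run on its own core inside a sequential CP solver; the queue here stands in for the work-stealing scheduler that rebalances load after the static decomposition proves uneven.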

Cited by 5 publications (3 citation statements)
References 31 publications
“…Results are reported on a wide range of enumeration and optimisation problems, showing that the technique beats a multi-armed bandit portfolio. Menouer et al (2016) show that starting with a static decomposition, and then switching to dynamic work-stealing, yields better results than either technique on its own when parallelising the OR-Tools solver. Results using two twelve-core machines are mixed, with speedups ranging from seven to ten.…”
Section: Heuristic-Ignorant Decompositions (confidence: 98%)
“…share common paradigms, they are typically individually parallelised and there is little code reuse. One approach to minimise development effort is to parallelise existing sequential solvers [23]. Alternatively, high-level frameworks provide developers with generic libraries to compose searches [1,28,8].…”
Section: The Challenges of Exact Combinatorial Search at HPC Scale (confidence: 99%)
“…Even when parallel searches share common paradigms, they are typically individually parallelised and there is little code reuse. One approach to minimise development effort is to parallelise existing sequential solvers [23]. Alternatively, high-level frameworks provide developers with generic libraries to compose searches [1,8,28].…”
Section: Global Knowledge Exchange (confidence: 99%)