2013
DOI: 10.1007/978-3-642-40627-0_45

Embarrassingly Parallel Search

Abstract: We propose the Embarrassingly Parallel Search, a simple and efficient method for solving constraint programming problems in parallel. We split the initial problem into a huge number of independent subproblems and solve them with available workers (i.e., cores of machines). The decomposition into subproblems is computed by selecting a subset of variables and by enumerating the combinations of values of these variables that are not detected inconsistent by the propagation mechanism of a CP solver. The …
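The decomposition described in the abstract can be illustrated with a small sketch. The toy Python example below (an illustration, not the authors' implementation) uses the n-queens problem: it enumerates the value combinations of the first K variables, keeps only those accepted by a simple pairwise consistency check (a stand-in for a CP solver's propagation step), and solves each surviving subproblem on a pool of workers. The model and the names `consistent` and `solve_subproblem` are illustrative assumptions, not from the paper.

```python
# A minimal EPS-style sketch on n-queens (illustrative, not the paper's code).
from concurrent.futures import ProcessPoolExecutor
from itertools import product

N = 10  # board size: variables = rows, values = columns
K = 3   # number of variables fixed by the decomposition

def consistent(partial):
    """Reject assignments where two queens attack each other
    (stand-in for a real solver's propagation check)."""
    for i, ci in enumerate(partial):
        for j, cj in enumerate(partial[:i]):
            if ci == cj or abs(ci - cj) == i - j:
                return False
    return True

def solve_subproblem(prefix):
    """Count all n-queens solutions extending the fixed prefix."""
    def extend(partial):
        if len(partial) == N:
            return 1
        return sum(extend(partial + (c,))
                   for c in range(N) if consistent(partial + (c,)))
    return extend(prefix)

if __name__ == "__main__":
    # Decomposition: keep only the prefixes the consistency check accepts.
    subproblems = [p for p in product(range(N), repeat=K) if consistent(p)]
    print(f"{len(subproblems)} subproblems kept out of {N**K} combinations")
    # The independent subproblems are distributed over the available workers.
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(solve_subproblem, subproblems))
    print(f"{total} solutions")  # 724 for N = 10
```

For N = 10 and K = 3 this yields a few hundred independent subproblems out of 1,000 enumerated combinations, and the workers together recover all 724 solutions of the 10-queens problem.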

Cited by 57 publications (59 citation statements)
References 12 publications
“…problem up into many more pieces than we have cores, so that an even balance is obtained automatically. A similar approach in a constraint programming setting has been used by Régin et al (2013), with very favourable results on a range of standard constraint programming problems (but maximum clique was not considered).…”
Section: The Importance of Good Work Splitting
confidence: 99%
“…In other words, load balancing is obtained automatically in a statistical sense. Interestingly, the experiments of [21] have shown that the number of subproblems needed does not depend on the initial problem but rather on the number of workers. Moreover, they have shown that a good decomposition has to generate more than 30 subproblems per worker.…”
Section: Embarrassingly Parallel Search
confidence: 99%
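The sizing observation quoted above can be turned into a simple rule of thumb. The sketch below (an assumption-laden illustration, not the paper's algorithm) keeps fixing one more variable per prefix until the number of propagation-consistent prefixes reaches 30 per worker; `domains` and `consistent` are hypothetical placeholders for a real CP model.

```python
# A minimal sketch of the "30 subproblems per worker" sizing rule.
# `domains` lists the values of each decomposition variable; `consistent`
# stands in for the solver's propagation check. Both are assumptions,
# not part of any published interface.
SUBPROBLEMS_PER_WORKER = 30

def decompose_for(workers, domains, consistent):
    """Deepen the decomposition until it yields at least
    SUBPROBLEMS_PER_WORKER subproblems per worker."""
    target = SUBPROBLEMS_PER_WORKER * workers
    prefixes, depth = [()], 0
    while len(prefixes) < target and depth < len(domains):
        # Fix one more variable: extend every prefix with each value
        # of the next variable that survives the consistency check.
        prefixes = [p + (v,)
                    for p in prefixes
                    for v in domains[depth]
                    if consistent(p + (v,))]
        depth += 1
    return prefixes
```

For example, with the toy n-queens model sketched earlier, `decompose_for(8, [range(10)] * 10, consistent)` deepens the decomposition until at least 8 × 30 = 240 consistent prefixes exist.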
“…Our approach does not require dealing with a set of instances or using sampling techniques, which are usually more accurate. It exploits the decomposition proposed by the recently developed embarrassingly parallel search (EPS) method [21,22]. EPS solves a problem by decomposing it into a large number of subproblems that are consistent with propagation (i.e., no immediate failure is triggered by the initial propagation of a subproblem).…”
Section: Introduction
confidence: 99%
“…Search-space splitting techniques such as domain decomposition could be interesting, but initial experiments [18] show that the speedup tends to flatten after a few tens of cores (e.g., a speedup of 28 with 32 cores and 29 with 64 cores for an all-solution search of the 17-queens problem). A recent approach based on a finer-grained domain decomposition [63] shows better performance. The results for all-solution search on classical CSPLib benchmarks are quite encouraging and show an average speedup of 14 with 40 cores (Resp.
Section: Introduction
confidence: 99%