Competence in High Performance Computing 2010 (2011)
DOI: 10.1007/978-3-642-24025-6_12

ParaSCIP: A Parallel Extension of SCIP

Cited by 36 publications (30 citation statements)
References 7 publications
“…Future work involves applying our implementation to data sets with larger p and/or n. A possible way to accomplish this is to use parallel computation via ParaSCIP and FiberSCIP [15]. Secondly, various non-AIC information criteria, e.g.…”
Section: Results
confidence: 99%
“…To get a feel for the answer to that question, we performed preliminary experiments with ParaSCIP (Shinano et al. 2012) employing SCIP (Achterberg 2009) as an ILP solver and using CPLEX 12.4 to solve the LP relaxations. ParaSCIP consists of a supervisor (load coordinator) system capable of maintaining the trunk of a B&B tree and distributing the solution of an ILP over a large number of PEs by use of the MPI communication protocol.…”
Section: Solution on Many PEs (Distributed Memory)
confidence: 99%
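The supervisor/worker layout this excerpt describes can be sketched in a few lines of MPI. The skeleton below is not ParaSCIP's actual code: solve_subproblem() is a hypothetical stand-in for processing one branch-and-bound node, and the message tags are invented. It illustrates only the pattern of a load coordinator handing out open nodes and collecting bounds.

// Minimal supervisor/worker sketch of the load-coordinator idea.
// NOT ParaSCIP's implementation; solve_subproblem() is a stand-in.
#include <mpi.h>
#include <cstdio>

// Hypothetical placeholder: "solve" one B&B node and return its bound.
static double solve_subproblem(int id) { return 100.0 + id; }

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int NSUB = 16;                     // open nodes to hand out
    const int TAG_WORK = 1, TAG_RESULT = 2, TAG_STOP = 3;

    if (rank == 0) {                         // supervisor (load coordinator)
        int next = 0, busy = 0;
        for (int w = 1; w < size; ++w) {     // prime every worker once
            if (next < NSUB) {
                MPI_Send(&next, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
                ++next; ++busy;
            } else {
                MPI_Send(&next, 0, MPI_INT, w, TAG_STOP, MPI_COMM_WORLD);
            }
        }
        double best = 1e300;                 // best bound reported so far
        while (busy > 0) {
            double bound;
            MPI_Status st;
            MPI_Recv(&bound, 1, MPI_DOUBLE, MPI_ANY_SOURCE, TAG_RESULT,
                     MPI_COMM_WORLD, &st);
            if (bound < best) best = bound;
            if (next < NSUB) {               // keep the worker busy
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                         MPI_COMM_WORLD);
                ++next;
            } else {                         // nothing left: release it
                MPI_Send(&next, 0, MPI_INT, st.MPI_SOURCE, TAG_STOP,
                         MPI_COMM_WORLD);
                --busy;
            }
        }
        std::printf("best bound found: %g\n", best);
    } else {                                 // worker: solve what arrives
        for (;;) {
            int id;
            MPI_Status st;
            MPI_Recv(&id, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP) break;
            double bound = solve_subproblem(id);
            MPI_Send(&bound, 1, MPI_DOUBLE, 0, TAG_RESULT, MPI_COMM_WORLD);
        }
    }
    MPI_Finalize();
    return 0;
}

ParaSCIP's real coordinator additionally keeps the trunk of the B&B tree and rebalances work dynamically; the sketch shows only the message pattern.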
“…We primarily address algorithms that derive bounds on subproblems by solving relaxations of the original ILP that are (continuous) linear optimization problems (LPs) obtained by dropping the integrality requirements on the variables. These relaxations are typically augmented by dynamically generated valid inequalities (see, e.g., Wolsey 1998 and Achterberg 2009 for details regarding general ILP solving, and Xu et al. 2009; Shinano et al. 2012; Applegate et al. 2007; Phillips et al. 2006 for distributed-memory solution techniques).…”
confidence: 99%
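The bounding scheme this excerpt describes (solve the LP obtained by dropping the integrality requirements) can be made concrete with SCIP's C API. The calls below are SCIP's documented API; the two-variable knapsack model is invented for illustration, and switching the variable type from integer to continuous is exactly the relaxation step quoted above.

// A minimal sketch of LP-relaxation bounding with SCIP's C API.
// The tiny knapsack model is invented; the API calls are real.
#include <scip/scip.h>
#include <scip/scipdefplugins.h>

static SCIP_RETCODE solve_demo(SCIP_Bool relax)
{
   SCIP* scip = NULL;
   SCIP_CALL( SCIPcreate(&scip) );
   SCIP_CALL( SCIPincludeDefaultPlugins(scip) );
   SCIP_CALL( SCIPcreateProbBasic(scip, "relaxation_demo") );
   SCIP_CALL( SCIPsetObjsense(scip, SCIP_OBJSENSE_MAXIMIZE) );

   /* dropping integrality (continuous vars) yields the LP relaxation */
   SCIP_VARTYPE type = relax ? SCIP_VARTYPE_CONTINUOUS : SCIP_VARTYPE_INTEGER;

   SCIP_VAR* x;
   SCIP_VAR* y;
   SCIP_CALL( SCIPcreateVarBasic(scip, &x, "x", 0.0, 1.0, 3.0, type) );
   SCIP_CALL( SCIPcreateVarBasic(scip, &y, "y", 0.0, 1.0, 2.0, type) );
   SCIP_CALL( SCIPaddVar(scip, x) );
   SCIP_CALL( SCIPaddVar(scip, y) );

   /* capacity constraint: 2x + 3y <= 4 */
   SCIP_VAR* vars[2] = { x, y };
   SCIP_Real coefs[2] = { 2.0, 3.0 };
   SCIP_CONS* cons;
   SCIP_CALL( SCIPcreateConsBasicLinear(scip, &cons, "cap", 2, vars, coefs,
         -SCIPinfinity(scip), 4.0) );
   SCIP_CALL( SCIPaddCons(scip, cons) );
   SCIP_CALL( SCIPreleaseCons(scip, &cons) );

   SCIP_CALL( SCIPsolve(scip) );
   SCIPinfoMessage(scip, NULL, "%s bound: %g\n",
         relax ? "LP relaxation" : "ILP", SCIPgetPrimalbound(scip));

   SCIP_CALL( SCIPreleaseVar(scip, &y) );
   SCIP_CALL( SCIPreleaseVar(scip, &x) );
   SCIP_CALL( SCIPfree(&scip) );
   return SCIP_OKAY;
}

int main(void)
{
   if( solve_demo(FALSE) != SCIP_OKAY || solve_demo(TRUE) != SCIP_OKAY )
      return 1;
   return 0;
}

For this toy model the integer optimum is 3 (x=1, y=0), while the relaxation gives x=1, y=2/3 with value 3 + 4/3 ≈ 4.33: precisely the kind of dual bound a B&B node uses for pruning.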
“…Erraticism is also exploited in the context of massively parallel computing, which nowadays is becoming widely available due to multi-core technology and computer grids, allowing for the development of effective parallel MIP solvers [19,18,17]. Taking full advantage of the new architecture is far from trivial, in particular because the branching nodes produced in the earliest "ramp-up" phase of the enumeration cannot be distributed in a balanced way among the processors.…”
Section: Introduction
confidence: 99%
“…Taking full advantage of the new architecture is far from trivial, in particular because the branching nodes produced in the earliest "ramp-up" phase of the enumeration cannot be distributed in a balanced way among the processors. Racing ramp-up is a technique proposed in [17,16] with the aim of avoiding idle processors: the same MIP solver is initially run with different settings, in parallel, until a stopping criterion is reached. It is then decided which of the generated trees performed best according to some criterion (not described in full detail).…”
Section: Introduction
confidence: 99%
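The racing ramp-up idea quoted above lends itself to a compact thread-based sketch. Everything below is invented for illustration (Settings, ramp_up(), the scoring rule); the cited papers race full MIP solver instances with different parameter settings and keep the best-performing tree.

// Thread-based sketch of racing ramp-up; all names are hypothetical.
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

struct Settings {                  // invented: one solver parameterization
    int branching_rule;
    double cut_aggressiveness;
};

// Invented stand-in: run one solver instance until the ramp-up
// stopping criterion is reached and score the tree it built.
static double ramp_up(const Settings& s) {
    return s.branching_rule + s.cut_aggressiveness;   // dummy score
}

int main() {
    const std::vector<Settings> racers = {
        {0, 0.5}, {1, 0.5}, {0, 1.0}, {1, 1.0}        // one setting per racer
    };
    std::vector<double> score(racers.size(), 0.0);

    std::vector<std::thread> threads;
    for (std::size_t i = 0; i < racers.size(); ++i)   // run the race
        threads.emplace_back([&racers, &score, i] {
            score[i] = ramp_up(racers[i]);
        });
    for (std::thread& t : threads) t.join();

    std::size_t best = 0;          // keep the tree that "performed best"
    for (std::size_t i = 1; i < score.size(); ++i)
        if (score[i] > score[best]) best = i;
    std::printf("winning configuration: %zu\n", best);
    return 0;
}

In the actual racing scheme the losing runs are discarded and the winner's open nodes seed the parallel phase, which is how idle processors during ramp-up are avoided.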