2013
DOI: 10.1016/j.jpdc.2013.07.023

Combining multi-core and GPU computing for solving combinatorial optimization problems

Cited by 39 publications (53 citation statements)
References 9 publications
“…In [13], a Master-Slave approach is adopted and only an experimental scale of 2 GPUs is reported, which is clearly not sufficient to exploit the power of modern compute platforms (sub-optimal speed-ups are actually reported). In [17], the authors experimented with multi-core pool-based B&B with parallel bounding inside the GPU devices. Only shared-memory threads are studied and small parallel scales (up to 6 threads) are considered therein.…”
Section: Parallel B&B with GPUs
confidence: 99%
“…Only shared-memory threads are studied and small parallel scales (up to 6 threads) are considered therein. In [14,15], the master-slave approach of [42] is combined with GPU-guided implementations similar to [17] in an attempt to tackle large-scale environments. The focus therein is rather on feasibility and implementation issues; unfortunately, scalability and performance optimality were not addressed in a comprehensive manner.…”
Section: Parallel B&B with GPUs
confidence: 99%
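The pool-based B&B with parallel bounding discussed in these excerpts can be illustrated on a single machine. The following is a minimal sketch, not the cited authors' code: the 0/1-knapsack instance, the fractional bounding function, and the thread pool (standing in for the GPU bounding step) are all illustrative assumptions.

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

# Illustrative 0/1-knapsack instance (values, weights, capacity), sorted by
# value/weight ratio -- an assumption, not a benchmark from the cited papers.
VALUES = [60, 100, 120, 80]
WEIGHTS = [10, 20, 30, 25]
CAPACITY = 50

def bound(level, value, weight):
    """Fractional (LP) upper bound on the best value reachable from a node."""
    if weight > CAPACITY:
        return 0.0
    b, w = float(value), weight
    for i in range(level, len(VALUES)):
        if w + WEIGHTS[i] <= CAPACITY:
            w += WEIGHTS[i]
            b += VALUES[i]
        else:
            b += VALUES[i] * (CAPACITY - w) / WEIGHTS[i]
            break
    return b

def branch_and_bound(workers=4):
    best = 0
    # Pool of live nodes: (negated bound, tree level, value, weight).
    pool = [(-bound(0, 0, 0), 0, 0, 0)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        while pool:
            neg_b, level, value, weight = heapq.heappop(pool)
            if -neg_b <= best or level == len(VALUES):
                continue
            # Branch on item `level`; bound the children in parallel, mimicking
            # the "parallel bounding" step the cited papers offload to the GPU.
            children = []
            w_in = weight + WEIGHTS[level]
            if w_in <= CAPACITY:
                children.append((level + 1, value + VALUES[level], w_in))
            children.append((level + 1, value, weight))
            bounds = list(ex.map(lambda c: bound(*c), children))
            for (lv, v, w), b in zip(children, bounds):
                best = max(best, v)  # any partial assignment is feasible here
                if b > best:
                    heapq.heappush(pool, (-b, lv, v, w))
    return best
```

On this toy instance `branch_and_bound()` returns 220 (items 1 and 2). In the actual GPU setting, the bounding of many pool nodes is batched into one kernel launch rather than mapped over a thread pool.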
“…Researchers have shown that revisiting classic algorithms and adding parallelism to them is a very effective strategy for solving COPs. Such is the work in [32], where the authors revisited the design and implementation of Branch-and-Bound algorithms for solving large COPs on GPU-enhanced multicore machines, and in [33], where a high-performance GPU implementation of the 2-opt and 3-opt local search algorithms was presented. Other metaheuristics that were enhanced using GPUs are Ant Colony Optimization [34,35] and Genetic Algorithms [36,37].…”
Section: Related Work
confidence: 99%
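The GPU 2-opt implementation mentioned in this excerpt evaluates all candidate moves in parallel. A minimal CPU stand-in, assuming a symmetric distance matrix and a best-improvement strategy (the vectorized NumPy pass plays the role of the GPU kernel; none of this is the cited authors' code):

```python
import numpy as np

def tour_length(dist, tour):
    """Length of a closed tour given a symmetric distance matrix."""
    return sum(dist[tour[k], tour[(k + 1) % len(tour)]]
               for k in range(len(tour)))

def two_opt(dist, tour):
    """Best-improvement 2-opt: score every move in one vectorized pass."""
    tour = np.array(tour)
    n = len(tour)
    # Candidate moves (i, j), 1 <= i < j <= n-1: reverse the segment tour[i..j].
    pos_i, pos_j = np.triu_indices(n - 1, k=1)
    pos_i, pos_j = pos_i + 1, pos_j + 1
    improved = True
    while improved:
        improved = False
        succ = np.roll(tour, -1)  # successor of each position (closed tour)
        # Cost change of every move at once -- the step a GPU kernel parallelizes.
        delta = (dist[tour[pos_i - 1], tour[pos_j]]
                 + dist[tour[pos_i], succ[pos_j]]
                 - dist[tour[pos_i - 1], tour[pos_i]]
                 - dist[tour[pos_j], succ[pos_j]])
        k = int(np.argmin(delta))
        if delta[k] < -1e-9:  # apply the single best improving move
            i, j = pos_i[k], pos_j[k]
            tour[i:j + 1] = tour[i:j + 1][::-1]
            improved = True
    return tour.tolist()
```

For instance, on four cities at the corners of a unit square, starting from the crossing tour `[0, 2, 1, 3]`, one pass uncrosses the edges and yields the optimal tour of length 4.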
“…Many parallel models for metaheuristics have been proposed to solve MOCO problems efficiently, and they have been evaluated on a wide range of academic and real-world MOCO problems in different domains [7,39]. Furthermore, parallelization of exact combinatorial-optimization methods, such as Branch and Bound [28] and Dynamic Programming [4], has been studied and implemented in multi-core environments [9]; however, it has rarely been addressed in the context of multi-objective optimization [39]. The only work we are aware of is from Dhaenens et al. [13], who parallelized the exact solving of MOCO problems by geometrically splitting the search space into cubes and evaluated their algorithm on one case study; however, their parallelization does not scale well beyond 10 processors.…”
Section: Related Work
confidence: 99%
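The geometric decomposition attributed to Dhaenens et al. can be sketched in miniature: split the decision space into cubes, solve each cube exhaustively and independently, then merge the per-cube optima. Everything below is an illustrative assumption (a single-objective stand-in for what is a multi-objective method in the cited work, with a made-up objective `f`):

```python
from concurrent.futures import ThreadPoolExecutor

def f(x, y):
    # Illustrative objective with optimum at (3, 7); not from the cited paper.
    return (x - 3) ** 2 + (y - 7) ** 2

def solve_cube(cube):
    """Exhaustively solve one sub-cube of the decision space."""
    (x0, x1), (y0, y1) = cube
    return min((f(x, y), (x, y))
               for x in range(x0, x1) for y in range(y0, y1))

def parallel_solve(size=16, split=4):
    """Split [0, size)^2 into split*split cubes, solve them independently,
    and merge the per-cube optima -- the geometric-decomposition idea."""
    step = size // split
    cubes = [((i, i + step), (j, j + step))
             for i in range(0, size, step)
             for j in range(0, size, step)]
    with ThreadPoolExecutor(max_workers=split) as ex:
        return min(ex.map(solve_cube, cubes))
```

Because the cubes are solved independently, speed-up is limited by the most expensive cube; load imbalance between cubes is one plausible reason the cited scheme stops scaling at around 10 processors.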