2009
DOI: 10.1007/978-3-642-01970-8_102

Parallel Calculating of the Goal Function in Metaheuristics Using GPU

Abstract: We consider a metaheuristic optimization algorithm which uses a single process (thread) to guide the search through the solution space. The thread iteratively (in a cyclic way) performs two main tasks: evaluation of the goal function for a single solution or a set of solutions, and management (solution filtering and selection, history collection, updating). The latter task takes, statistically, 1-3% of the total iteration time, so we skip its acceleration as unprofitable. The former task can be accelerated in pa…
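The split the abstract describes (a single guiding thread that farms out goal-function evaluation in parallel while keeping the cheap management step sequential) can be sketched with a CPU thread pool. This is a minimal illustration under assumptions, not the paper's GPU implementation; the goal function and the candidate encoding here are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def goal(solution):
    # Hypothetical goal function: sum of squared components (to minimize).
    return sum(x * x for x in solution)

def search_iteration(neighborhood, executor):
    # Task 1 (the bulk of the iteration time): evaluate the goal function
    # for a whole set of candidate solutions in parallel.
    costs = list(executor.map(goal, neighborhood))
    # Task 2 (management, ~1-3% of the time): filtering and selection stay
    # on the single guiding thread.
    best = min(range(len(costs)), key=costs.__getitem__)
    return costs[best], neighborhood[best]

executor = ThreadPoolExecutor(max_workers=4)
best_cost, best_sol = search_iteration([[1, 2], [0, 1], [3, 0]], executor)
executor.shutdown()
```

Here the iteration returns cost 1 for the candidate `[0, 1]`; on a GPU, the parallel map over candidates would be replaced by a kernel launch over the candidate set.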

Cited by 7 publications (5 citation statements).
References 7 publications (5 reference statements).
“…Proposals of GPU implementations of other metaheuristics have also been recently presented, such as the fine-grain parallel fitness evaluation in [8], and parallel implementations of different EAs as Ant Colony Optimization (ACO) [5], PSO [11], or DE [26,20]. A more detailed review of parallel algorithms is included in the following work [2].…”
Section: Parallel EAs Using GPU
confidence: 99%
“…Equation (6) is obvious. Equation (7) guarantees that operations will be processed for the required amount of time without interruption. Inequality (8) ensures an operation cannot start before its machine predecessor completes, while also guaranteeing that no more than one operation is processed by a given machine at any time.…”
Section: Problem Formulation
confidence: 99%
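The machine constraint quoted above (inequality (8)) can be checked directly: each operation scheduled on a machine must start no earlier than the completion of its machine predecessor, which also rules out any overlap. A small feasibility check, assuming a hypothetical (start, duration) encoding; the function name and tuple layout are illustrative, not from the cited formulation:

```python
def machine_feasible(schedule):
    # schedule: (start, duration) pairs for the operations assigned to one
    # machine, listed in their processing order (hypothetical encoding).
    for (s1, p1), (s2, _p2) in zip(schedule, schedule[1:]):
        # Inequality (8): a successor cannot start before its machine
        # predecessor completes, so no two operations ever overlap.
        if s2 < s1 + p1:
            return False
    return True

print(machine_feasible([(0, 3), (3, 2), (6, 1)]))  # True: each op waits for its predecessor
print(machine_feasible([(0, 3), (2, 2)]))          # False: second op starts at 2, before time 3
```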
“…Thus the synchronization and communication between parallel threads occurs more often. The fine-grained parallelization for JSSP was considered in the past, mainly in papers by Bożejko et al. [7,9], where the authors considered parallel computation of the cost function on GPGPU devices. A mixed fine-coarse-grained approach was also shown in [8], where a framework for running a TS method on a cluster of GPGPU devices was proposed.…”
Section: Parallelization Challenges
confidence: 99%
“…EAs have been the preferred metaheuristic to parallelize on GPU, including the fine-grain master-slave model that implements the parallel fitness evaluation for EAs (Li et al.; Tsutsui; Wong and Wong; Yu et al.), ES (Zhu) and hybrid EAs (Man-Leung and Tien-Tsin; Munawar et al.), the cellular model (Luo and Liu; Vidal and Alba), and the island based model (Luong et al.; Maitre et al.; Pospíchal et al.; Risco-Martin et al.). Proposals of GPU implementations for other metaheuristics have also been recently presented, such as the fine-grain parallel fitness evaluation in single-thread methods (Bozejko et al.), the parallel independent runs of ACO (Bai et al.), the master-slave parallel ACO (Catalá et al.; Fu et al.; Zhu and Curry), the fine-grained parallel immune algorithms (Li et al.; Zhao et al.), and the two-level approach for parallel metaheuristics (Bozejko et al.).…”
Section: Technologies for Parallel Metaheuristics
confidence: 99%