2020
DOI: 10.1016/j.future.2020.03.045
Desynchronization in distributed Ant Colony Optimization in HPC environment

Cited by 20 publications (12 citation statements) · References 19 publications
“…As we have shown in [18], ACO can reach good scalability for up to hundreds of nodes in an HPC environment. The proposed architecture keeps the pheromone matrix parts distributed across all nodes and uses desynchronized updates in order to achieve good scalability without a noticeable deterioration of the result quality.…”
Section: Scalability in Computations (mentioning)
confidence: 78%
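The cited architecture keeps pheromone-matrix parts distributed across nodes and updates them without a global barrier. A minimal sketch of that idea, using threads in place of HPC nodes and a hypothetical `DistributedPheromone` class (all names here are illustrative, not from the paper):

```python
import random
import threading

class DistributedPheromone:
    """Toy model of a pheromone matrix partitioned across workers.

    Each worker locks only the partition that owns a row, so deposits
    proceed without a global synchronization point, mimicking the
    desynchronized updates described in the citation.
    """

    def __init__(self, n_cities, n_partitions, init=1.0):
        self.n = n_cities
        self.n_partitions = n_partitions
        self.tau = [[init] * n_cities for _ in range(n_cities)]
        self.locks = [threading.Lock() for _ in range(n_partitions)]

    def partition_of(self, row):
        # Map a matrix row to the partition (worker) that owns it.
        return row * self.n_partitions // self.n

    def deposit(self, tour, amount, rho=0.1):
        # Evaporate and deposit along the tour's edges, locking only
        # the owning partition of each edge's source row.
        for i, j in zip(tour, tour[1:] + tour[:1]):
            with self.locks[self.partition_of(i)]:
                self.tau[i][j] = (1 - rho) * self.tau[i][j] + amount

def worker(ph, n_iters, rng):
    # Stand-in for an ant colony node: build random tours and deposit.
    for _ in range(n_iters):
        tour = list(range(ph.n))
        rng.shuffle(tour)
        ph.deposit(tour, amount=1.0)

ph = DistributedPheromone(n_cities=8, n_partitions=4)
threads = [threading.Thread(target=worker, args=(ph, 50, random.Random(s)))
           for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because each update touches a single partition lock, workers that finish an iteration early simply continue, which is exactly the "another randomization factor" effect the next citation describes.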
“…The impact of desynchronization on the results is negligible and might be considered as another randomization factor. On the other hand, the speedup of the algorithm allows it to compute many more possible solutions of a problem, which leads to increased exploration and improved final results [18]. The speedup also enables faster optimization thanks to extensive parallelization.…”
Section: Results (mentioning)
confidence: 99%
“…Metaheuristics can be parallelized in a number of ways [27,28], and there are works on parallelization of each of the basic metaheuristics considered (Local Search [29], Tabu Search [30], Genetic Algorithm [31], Ant Colony [32], ...), and for different types of computational systems (e.g., CMP architectures [33], distributed platforms [34] and GPUs [35,36]). On the other hand, the unified, parameterized schema enables the simultaneous implementation of parallel versions of different basic metaheuristics and their hybridizations for different types of computational systems (e.g., shared-memory [11], heterogeneous clusters [12] and many-core systems like GPUs [37]).…”
Section: Hybrid Metaheuristics on Heterogeneous Multicore+MultiGPU (mentioning)
confidence: 99%
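The simplest of the parallelization patterns listed above is running independent instances of a basic metaheuristic concurrently and keeping the best result. A hedged sketch (the objective and `local_search` routine are invented for illustration, not taken from the cited works):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def local_search(seed, n_steps=200):
    """Toy basic metaheuristic: hill-climb minimizing f(x) = x^2."""
    rng = random.Random(seed)
    x = rng.uniform(-10.0, 10.0)
    best = x * x
    for _ in range(n_steps):
        cand = x + rng.uniform(-1.0, 1.0)   # perturb current solution
        if cand * cand < best:              # accept only improvements
            x, best = cand, cand * cand
    return best

# Independent runs of the same metaheuristic, one per worker;
# the reduction step keeps the best objective value found.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(local_search, range(4)))
best = min(results)
```

The same map-and-reduce shape transfers to distributed platforms or GPUs by swapping the executor, which is one reason a unified, parameterized schema can cover several target architectures.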