2015
DOI: 10.1016/j.ins.2015.06.040
On the design of shared memory approaches to parallelize a multiobjective bee-inspired proposal for phylogenetic reconstruction

Cited by 9 publications (6 citation statements)
References 42 publications (91 reference statements)
“…9 papers out of the 87 made use of algorithms for continuous optimization other than PSO. The Artificial Bee Colony algorithm was found in 8 works ([20][21][22][23][24][25]), while the Fish School Search algorithm was used in [26]. Tables 5 and 6 display the information about these papers.…”
Section: Other Algorithms
confidence: 99%
“…The application of multi-objective optimization in phylogeny represents a promising way to deal with the main sources of inconsistency that may affect the reliability of phylogenetic reasoning. According to [32], the study of phylogenetic reconstruction can be divided into two aspects. On the one hand, a series of multi-objective evolutionary algorithms have been successfully proposed to resolve conflicting information across different data sets.…”
Section: B. Multi-objective Evolutionary Algorithm
confidence: 99%
“…In shared memory machines, all processors have access to a common memory, and communication between them is done by reading from and writing to that shared memory. In distributed memory machines, each processor has its own local memory, and communication between processors is done through the exchange of messages over a network [9]. There are two general types of parallelization techniques, namely data parallelization and task parallelization.…”
Section: Introduction
confidence: 99%