2019
DOI: 10.1109/access.2019.2941086

Comparison of High Performance Parallel Implementations of TLBO and Jaya Optimization Methods on Manycore GPU

Abstract: The use of optimization algorithms in engineering problems has grown considerably in recent years, leading to the proliferation of new algorithms for solving optimization problems. In addition, the emergence of new parallelization techniques applicable to these algorithms to improve their convergence time has made them a subject of study by many authors. Recently, two optimization algorithms have been developed: Teaching-Learning Based Optimization and Jaya. One of the main advanta…

Cited by 10 publications (7 citation statements)
References 34 publications

“…In the learner phase, each individual strives to learn via contact with other individuals within the population. The following is the TLBO algorithm's behaviour [48]. Random data is used to build the population and initialize the individuals (values of the design variables).…”
Section: B. Teaching-Learning Based Optimization (mentioning, confidence: 99%)
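The learner-phase behaviour quoted above can be made concrete with a short sketch. This is a minimal illustration assuming minimization, a NumPy population matrix, and greedy acceptance; the function and parameter names are illustrative, not taken from the cited implementation:

import numpy as np

def learner_phase(pop, fitness, objective, rng):
    # One TLBO learner phase over a population matrix pop of shape (n, d),
    # with fitness holding the objective value of each row (minimization).
    n, d = pop.shape
    for i in range(n):
        # Pick a random partner j != i to interact with.
        j = rng.integers(n - 1)
        if j >= i:
            j += 1
        r = rng.random(d)
        # Move toward the partner if it is better, away from it otherwise.
        if fitness[j] < fitness[i]:
            candidate = pop[i] + r * (pop[j] - pop[i])
        else:
            candidate = pop[i] + r * (pop[i] - pop[j])
        f_new = objective(candidate)
        # Greedy acceptance: keep the new position only if it improves.
        if f_new < fitness[i]:
            pop[i], fitness[i] = candidate, f_new
    return pop, fitness

Here rng would be a numpy.random.Generator, e.g. rng = np.random.default_rng(0).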
“…These numbers are employed in the algorithm's two primary stages: the teacher and learner phases. In the teacher phase, the teacher acquired by the current generation, X_best, is utilized to build a new version of each individual, X_new, using the equation [48]:…”
Section: B. Teaching-Learning Based Optimization (mentioning, confidence: 99%)
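The excerpt truncates before the equation itself. For reference, the standard teacher-phase update from the original TLBO method is X_new = X_old + r * (X_best - TF * X_mean), with teaching factor TF randomly chosen as 1 or 2. A minimal sketch under that assumption (again for minimization, with illustrative names):

import numpy as np

def teacher_phase(pop, fitness, objective, rng):
    # One TLBO teacher phase: every individual moves toward the current
    # teacher X_best and away from the scaled population mean (minimization).
    n, d = pop.shape
    x_best = pop[np.argmin(fitness)]   # the "teacher" of this generation
    x_mean = pop.mean(axis=0)          # mean of the class, per variable
    tf = rng.integers(1, 3)            # teaching factor: 1 or 2
    for i in range(n):
        r = rng.random(d)
        candidate = pop[i] + r * (x_best - tf * x_mean)
        f_new = objective(candidate)
        if f_new < fitness[i]:         # greedy acceptance
            pop[i], fitness[i] = candidate, f_new
    return pop, fitness

In a GPU implementation such as the one compared in this paper, the per-individual loop is the part that is typically mapped to parallel threads, since each candidate update and evaluation is independent.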
“…Similar works in [36,37] used GPUs and FPGAs to accelerate the genetic algorithm (GA). Notably, Garcia et al. [38] achieved a parallel implementation and comparison of teaching-learning-based optimization (TLBO) and Jaya on a many-core GPU. As for WOA, Khalil et al. [39] proposed a simple and robust distributed WOA using Hadoop MapReduce, achieving a promising speedup.…”
Section: Related Work (mentioning, confidence: 99%)
“…The authors used unconstrained benchmark functions to test the proposed approach. They also analysed the utilization of the GPUs by each approach [48]. García-Monzó et al. developed a shared-memory-based and a message-passing-based parallel TLBO algorithm.…”
Section: Related Work (mentioning, confidence: 99%)