2020
DOI: 10.1155/2020/4968063
An Enhanced Comprehensive Learning Particle Swarm Optimizer with the Elite-Based Dominance Scheme

Abstract: In recent years, swarm-based stochastic optimizers have achieved remarkable results in tackling real-life problems in engineering and data science. Among particle swarm optimization (PSO) variants, the comprehensive learning PSO (CLPSO) is a well-established evolutionary algorithm that introduces a comprehensive learning strategy (CLS), which effectively boosts the efficacy of the PSO. However, when a unimodal function is processed, the convergence speed of the algorithm is too slow to converge qui…
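The comprehensive learning strategy mentioned in the abstract lets each dimension of a particle learn from a potentially different particle's personal best, rather than every dimension following the same global best. A minimal sketch of that per-dimension velocity update is below; the function name, parameter defaults, and the assumption that exemplars are supplied per dimension are illustrative, not taken from the paper.

```python
import random

def cl_velocity_update(v, x, exemplars, w=0.7298, c=1.49445):
    """One comprehensive-learning velocity update (illustrative sketch).

    v, x      : current velocity and position of one particle (lists of floats)
    exemplars : per-dimension exemplar positions -- in comprehensive learning,
                each dimension d may follow a different particle's pbest[d]
    w, c      : inertia weight and acceleration coefficient (common defaults)
    """
    # Each dimension is pulled toward its own exemplar, scaled by a
    # fresh random number, plus the inertia term.
    return [w * vd + c * random.random() * (ed - xd)
            for vd, xd, ed in zip(v, x, exemplars)]
```

Because every dimension can track a different exemplar, the swarm explores more combinations of good coordinates, which is what gives CLPSO its multimodal strength; the abstract's criticism is that this same mechanism slows convergence on unimodal functions.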

Cited by 20 publications (13 citation statements) | References 114 publications
“…We initially present a comparison of the proposed algorithm, EDCQPSO, with others on 30 classic benchmark functions from IEEE CEC2017 [46], as shown in Table 1. The performance of our algorithm on the benchmark functions was verified.…”
Section: Experimental Setup and Performance Analysis
confidence: 99%
“…The mean values and standard deviations after thirty iterations on thirty benchmark functions are listed. Table 4 shows that EDCQPSO ranks first, followed sequentially by GWO-GOA, GHO, GWO, IWO, EBFO, and DA, based on the overall rank for the CE01-CE30 functions of CEC2017 [46]. On the three unimodal test functions (CE01-CE03), EDCQPSO performs better than the other algorithms.…”
Section: A Comparison of the EDCQPSO with Other Swarm Algorithms
confidence: hi
confidence: 99%
“…As far as swarm intelligence optimization algorithms are concerned, a number of related algorithms have been proposed, including grey wolf optimization (GWO) [55], moth-flame optimization (MFO) [56], PSO [57], sine cosine algorithm (SCA) [58], whale optimizer (WOA) [59], multi-verse optimizer (MVO) [60], Harris hawks optimization (HHO) [61], slime mould algorithm (SMA) [62], hunger games search (HGS) [63], Runge Kutta optimizer (RUN) [64], modified SCA (m_SCA) [65], boosted GWO (OBLGWO) [66], opposition-based SCA (OBSCA) [67], A-C parametric WOA (ACWOA) [68], biogeography-based learning PSO (BLPSO) [69], comprehensive learning PSO (CLPSO) [70], moth-flame optimizer with sine cosine mechanisms (SMFO) [71], enhanced comprehensive learning particle swarm optimizer (GCLPSO) [72], enhanced GWO with a new hierarchical structure (IGWO) [73], improved WOA (IWOA) [74], and ant colony optimization (ACO) for continuous domains (ACOR) [75]. Notably, it is well known that ACO [76, 77] is an algorithm for solving discrete optimization problems, whereas ACOR can be used to solve optimization problems other than discrete ones.…”
Section: Introduction
confidence: 99%
“…However, it should be noted that the originality of some of these methods, such as GWO, BAT, and FA, is limited and has been criticized in several papers [44, 85, 86]. Meanwhile, many corresponding improved algorithms exist, such as the enhanced comprehensive learning particle swarm optimizer (GCLPSO) [87], random spare ant colony optimization (RCACO) [88], enhanced whale optimizer with associative learning (BMWOA) [89], enhanced GWO with a new hierarchical structure (IGWO) [48], hybridizing grey wolf optimization (HGWO) [90], boosted GWO (OBLGWO) [91], and ant colony optimizer with random spare strategy and chaotic intensification strategy (CCACO) [88]. MFO is a novel meta-heuristic algorithm for solving optimization problems.…”
Section: Introduction
confidence: 99%