2017
DOI: 10.1016/j.ins.2016.12.043
Particle swarm optimization using multi-level adaptation and purposeful detection operators

Cited by 31 publications (12 citation statements)
References 27 publications
“…In the past two decades, a number of variants of the original algorithm have been introduced in an attempt to further improve its performance. Typical variants can be summarized in the following four types: (1) neighborhood topology [29], [30]; (2) parameter control [31], [32]; (3) hybrid methods [33], [34]; and (4) novel learning schemes [35], [36]. Since genetic algorithms (GAs) have good exploration ability, genetic learning PSO (GLPSO) has been proposed [37] to strengthen the performance of PSO by generating high-quality exemplars to guide the evolution of the particles [38].…”
Section: B. PSO
confidence: 99%
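For context, the baseline that all of these variants modify is the canonical PSO velocity/position update. The following is a minimal Python sketch of that baseline; the inertia and acceleration coefficients (w, c1, c2), the search bounds, and the sphere objective are illustrative assumptions, not settings taken from the cited papers.

    import numpy as np

    # Canonical (global-best) PSO: the baseline that topology, parameter-control,
    # hybrid, and learning-scheme variants such as GLPSO build on.
    # All parameter values below are illustrative placeholders.
    def pso(objective, dim=10, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5.0, 5.0, (n_particles, dim))      # particle positions
        v = np.zeros((n_particles, dim))                     # particle velocities
        pbest = x.copy()                                     # personal best positions
        pbest_val = np.array([objective(p) for p in x])      # personal best values
        gbest = pbest[pbest_val.argmin()].copy()             # global best position

        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            # inertia + cognitive pull toward pbest + social pull toward gbest
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            vals = np.array([objective(p) for p in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

    best_x, best_f = pso(lambda p: float(np.sum(p ** 2)))   # sphere function as a toy objective

GLPSO, as described in the statement above, keeps this update structure but replaces the pbest/gbest exemplars with exemplars bred by genetic operators.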
“…The experimental outcomes produced by ePSO on the URLPs, i.e. URLP1-URLP4, are given in Tables 2-5, respectively, and ePSO is also compared with 13 other classical optimization algorithms, namely DMSPSO [41], F-PSO [42], OLPSO [43], SLPSO [44], PSODDS [45], SL-PSO [46], HCLPSO [47], SSS-APSO [48], SopPSO [49], JADE [50], SaDE [51], CoDE [52] and CMA-ES [53]. Tables 2 to 5 comprise the best and mean values over 30 independent runs with ranking, i.e.…”
Section: B. On Four Unconstrained Real-Life Problems
confidence: 99%
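As a hypothetical illustration of how such a comparison table is compiled, the sketch below computes the best and mean values over 30 independent runs for a few algorithms and ranks them by mean; the run data are random placeholders, not the cited results.

    import numpy as np

    # Placeholder run data: 30 independent final objective values per algorithm.
    rng = np.random.default_rng(1)
    algorithms = ["ePSO", "DMSPSO", "OLPSO", "CMA-ES"]         # subset, for illustration only
    runs = {name: rng.random(30) for name in algorithms}

    best = {name: vals.min() for name, vals in runs.items()}   # minimization assumed
    mean = {name: vals.mean() for name, vals in runs.items()}
    ranking = sorted(algorithms, key=lambda name: mean[name])  # rank by mean value

    for rank, name in enumerate(ranking, start=1):
        print(f"{name:8s} best={best[name]:.4f} mean={mean[name]:.4f} rank={rank}")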
“…Ghasemi et al. [26] proposed Gaussian barebones TLBO (GBTLBO) by using Gaussian sampling technology, and GBTLBO was applied to the optimal reactive power dispatch problem.
(3) Initialize learners and evaluate them;
(4) while stopping condition is not met
(5)   Choose the best learner as x_teacher;
(6)   Calculate the mean x_mean of all learners;
(7)   for each learner x_i
(8)     // Teacher phase //
(9)     T_F = round(1 + rand(0, 1));
(10)    Update the learner according to Eq. (1);
(11)    Evaluate the new learner x_i,new;
(12)    Accept x_i,new if it is better than the old one x_i,old;
(13)    // Learner phase //
(14)    Randomly select another learner x_j which is different from x_i;
(15)    Update the learner according to Eq.…”
Section: Improvements on TLBO
confidence: 99%
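The quoted listing can be made concrete with a short Python sketch of one TLBO generation. Eq. (1) is assumed here to be the standard TLBO teacher-phase update x_new = x + r·(x_teacher − T_F·x_mean), and the learner-phase move is the usual pairwise comparison; the objective and variable names are illustrative, not taken from the cited paper.

    import numpy as np

    # One TLBO generation following the listing above. Eq. (1) is assumed to be the
    # standard teacher-phase update; acceptance is greedy in both phases.
    def tlbo_generation(x, vals, objective, rng):
        n, dim = x.shape
        teacher = x[vals.argmin()]                      # best learner acts as the teacher
        mean = x.mean(axis=0)                           # class mean x_mean
        for i in range(n):
            # --- Teacher phase ---
            tf = rng.integers(1, 3)                     # T_F = round(1 + rand(0, 1)), i.e. 1 or 2
            new = x[i] + rng.random(dim) * (teacher - tf * mean)
            f_new = objective(new)
            if f_new < vals[i]:                         # accept x_new only if it is better
                x[i], vals[i] = new, f_new
            # --- Learner phase ---
            j = rng.choice([k for k in range(n) if k != i])
            step = (x[i] - x[j]) if vals[i] < vals[j] else (x[j] - x[i])
            new = x[i] + rng.random(dim) * step
            f_new = objective(new)
            if f_new < vals[i]:
                x[i], vals[i] = new, f_new
        return x, vals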
“…Generate a random number in [0, 1];
(8)   if the random number ≤ LE
(9)     Update learner x_i according to Eq. (5);
(10)    Evaluate the new learner x_i,new;
(11)    Accept the new learner x_i,new if it is better than the old one x_i,old;
(12)  end if
(13) end for
(14) end
Algorithm 2: Learning enthusiasm based teacher phase.…”
Section: Learning Enthusiasm Based Learner Phase
confidence: 99%
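The learning-enthusiasm gate in the quoted listing can be sketched as below: each learner is updated only when a uniform random number does not exceed its enthusiasm value LE. The linearly decreasing LE assignment by fitness rank and the Eq. (5)-style move are assumptions for illustration, not taken verbatim from the cited paper.

    import numpy as np

    # Learning-enthusiasm-gated teacher phase: learners with higher enthusiasm are
    # more likely to be updated. The LE assignment and the update rule are assumed forms.
    def le_teacher_phase(x, vals, objective, rng, le_max=1.0, le_min=0.3):
        n, dim = x.shape
        order = vals.argsort()                          # rank learners, best first
        le = np.empty(n)
        le[order] = np.linspace(le_max, le_min, n)      # better learners get higher LE
        teacher = x[vals.argmin()]
        mean = x.mean(axis=0)
        for i in range(n):
            if rng.random() <= le[i]:                   # enthusiasm gate, cf. the "if ≤ LE" check above
                tf = rng.integers(1, 3)
                new = x[i] + rng.random(dim) * (teacher - tf * mean)   # assumed Eq. (5)-style move
                f_new = objective(new)
                if f_new < vals[i]:                     # accept the new learner only if better
                    x[i], vals[i] = new, f_new
        return x, vals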