2020
DOI: 10.1016/j.knosys.2020.105789
Particle swarm optimization with adaptive learning strategy

Cited by 74 publications (35 citation statements)
References 30 publications
“…dedicated learning curves [37] and assignment of random weights according to the fitness scores [38], we employ a sinusoidal chaotic map to generate the weight factors for prioritizing the dominance of the best leader wolf α, as shown in (17). Then, the leadership factors of wolves β and δ are determined subsequently in accordance with that of wolf α, as indicated in (18). The position updating mechanism with the new dominance factors is presented in (19).…”
Section: Chaotic Dominance of Wolf Leaders
Confidence: 99%
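The paper's exact weighting equations (17)–(19) are not reproduced in this excerpt, but the generator it names, the sinusoidal chaotic map, is standard. Below is a minimal sketch of that map and a hypothetical way the resulting factors could prioritize the α wolf; the split between β and δ is an illustrative assumption, not the cited formulation.

```python
import math

def sinusoidal_map(x0=0.7, a=2.3, n=100):
    """Iterate the sinusoidal chaotic map x_{k+1} = a * x_k^2 * sin(pi * x_k).

    With a = 2.3 and x0 in (0, 1), iterates remain in (0, 1), which makes
    them usable directly as weight factors.
    """
    xs, x = [], x0
    for _ in range(n):
        x = a * x * x * math.sin(math.pi * x)
        xs.append(x)
    return xs

# Hypothetical use: weight the alpha wolf with the chaotic value at
# iteration k, and split the remainder between beta and delta
# (NOT the paper's exact Eqs. (17)-(19)).
weights = sinusoidal_map(n=50)
w_alpha = weights[10]
w_beta = w_delta = (1.0 - w_alpha) / 2.0
```

Because the map is chaotic rather than monotone, the α wolf's dominance fluctuates across iterations instead of following a fixed schedule.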
“…These advantageous characteristics endow GWO with enhanced exploration capability and search diversity, while maintaining its efficient computational cost. Comparatively, PSO is more likely to be trapped in local optima, owing to the dictation of the global best solution and the lack of diversification in its guiding signals over the entire iterative process [18,19]. While the GA is capable of attaining global optimality, a larger number of function evaluations is normally required.…”
Section: Introduction
Confidence: 99%
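The "dictation of the global best solution" this excerpt refers to is visible in the canonical PSO velocity update, where every particle is pulled toward the same gbest in every iteration. A minimal sketch of one update step (standard constriction-style coefficients; function and variable names are illustrative):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445):
    """One canonical PSO update for a single particle.

    The c2 term drags every particle toward the shared gbest, which is
    the guidance pattern the excerpt identifies as a cause of getting
    trapped in local optima on multimodal landscapes.
    """
    new_v = [w * vi
             + c1 * random.random() * (pb - xi)   # cognitive pull
             + c2 * random.random() * (gb - xi)   # social pull (shared gbest)
             for vi, xi, pb, gb in zip(v, x, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v

# Example: a 2-D particle at rest, attracted toward pbest and gbest.
x, v = [0.5, -1.2], [0.0, 0.0]
new_x, new_v = pso_step(x, v, pbest=[0.4, -1.0], gbest=[0.0, 0.0])
```

Since the social term uses one population-wide gbest, a gbest sitting in a local basin biases the whole swarm toward it; GWO's three-leader guidance diversifies this signal.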
“…Despite having a high convergence speed, BPSO tends to suffer from premature convergence because the particles are guided by both the personal and global best positions [41,49,50]. One recent trend to overcome this shortcoming of BPSO is to suppress the potential negative influences brought by both the personal and global best particles through the derivation of new exemplars from other non-fittest solutions [51].…”
Section: Modification in Learning Strategy
Confidence: 99%
“…There is a higher chance for the Pg particle to be trapped at the local optima of a complex search environment during the early stage of optimization, and the remaining population members can be misled towards these inferior solution regions. In [39,49], it was advocated that the negative influences of Pg can be suppressed by leveraging useful information from other non-fittest particles to formulate a unique exemplar for each particle in order to adjust its search trajectory. The simulation results of [51] also revealed that the directional information carried by other non-fittest population members cannot be underestimated in addressing the deficiency of BPSO.…”
Section: Derivation of Global Exemplar
Confidence: 99%
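The cited works' exact exemplar formulations are not given in this excerpt. As one simplified sketch of the general idea, each particle can be given a unique exemplar assembled dimension-by-dimension from the personal bests of randomly drawn peers (a comprehensive-learning-style tournament), so guidance is no longer dictated by a single Pg; all names here are illustrative assumptions:

```python
import random

def derive_exemplar(pbests, fitness, i):
    """Assemble a unique exemplar for particle i from peers' personal bests.

    Simplified sketch (NOT the exact scheme of the cited papers): each
    dimension copies the personal best of the fitter of two randomly
    drawn particles, so even non-fittest members contribute directional
    information instead of everyone following Pg.
    """
    dim = len(pbests[i])
    n = len(pbests)
    exemplar = []
    for d in range(dim):
        a, b = random.randrange(n), random.randrange(n)
        winner = a if fitness[a] < fitness[b] else b  # minimization
        exemplar.append(pbests[winner][d])
    return exemplar

# Example with three particles in 2-D.
pbests = [[0.1, 0.2], [0.5, -0.3], [1.0, 0.8]]
fitness = [3.0, 1.5, 2.2]
ex = derive_exemplar(pbests, fitness, i=0)
```

Because the tournament can select any particle, not just the global best, particles trapped near Pg's basin still receive guidance from other regions of the search space.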