2020
DOI: 10.1109/access.2020.3010543
A Molecular Interactions-Based Social Learning Particle Swarm Optimization Algorithm

Abstract: Social learning particle swarm optimization (SL-PSO) allows individuals to learn from others, improving scalability with easy parameter settings. However, it still suffers from poor convergence on multi-modal problems due to the loss of swarm diversity. To improve both diversity and convergence, this paper proposes a novel algorithm that applies the mechanism of molecular interactions to SL-PSO, in which molecular attraction aims to improve convergence, and molecular repulsion in…
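The base algorithm the abstract builds on, SL-PSO, can be sketched as follows. The update rule uses the commonly cited social-learning form (each non-best particle imitates a randomly chosen better-ranked demonstrator plus the swarm mean); swarm size, bounds, and coefficients here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def sl_pso(f, dim=10, swarm=50, iters=200, eps=0.5, seed=0):
    """Minimal SL-PSO sketch: particles are ranked by fitness each
    generation; every particle except the current best imitates a
    randomly chosen better-ranked demonstrator and is pulled toward
    the swarm mean. Parameter values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (swarm, dim))   # positions
    V = np.zeros_like(X)                        # behavior corrections
    for _ in range(iters):
        fit = np.array([f(x) for x in X])
        order = np.argsort(fit)                 # best first
        X, V = X[order], V[order]
        mean = X.mean(axis=0)
        for i in range(1, swarm):               # best particle stays put
            k = rng.integers(0, i)              # a better-ranked demonstrator
            r1, r2, r3 = rng.random(3)
            V[i] = r1 * V[i] + r2 * (X[k] - X[i]) + r3 * eps * (mean - X[i])
            X[i] = X[i] + V[i]
    fit = np.array([f(x) for x in X])
    return X[fit.argmin()], fit.min()

best_x, best_val = sl_pso(lambda x: np.sum(x ** 2))
```

On a simple sphere function this sketch converges steadily because the best-ranked particle is never perturbed; the paper's contribution (molecular attraction/repulsion) would modify how demonstrators pull or push particles, which is not reproduced here.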

Cited by 5 publications (1 citation statement)
References 51 publications
“…Xia et al. [ 14 ] discussed GPSO with tabu detection and local search in a shrunken space. Each dimension d is segmented into 7 regions of equal size. Every 5 consecutive generations, the variant computes an excellence level for each region on dimension d from the ranking of all particles' personal fitness values and the distribution of their personal best positions across the regions. Based on the excellence level of the region containing the global best position, the variant randomly generates a candidate replacement from another region to help the swarm escape a local optimum. When the global best position stays in one region on dimension d for 80 consecutive generations, the variant shrinks the dimensional search space to that region to speed up convergence. The variant additionally conducts local search with the aid of differential evolution. Other recent works integrating GPSO/LPSO with multistrategy and/or adaptivity include [ 15 32 ].…”
Section: Related Work
confidence: 99%
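The region-segmentation and space-shrinking steps described in the citation above can be sketched as below; the function names, bounds, and return conventions are illustrative assumptions, only the 7-region split and the 80-generation stall threshold come from the cited description:

```python
def region_index(x, lo, hi, n_regions=7):
    """Index of the equal-width segment of [lo, hi] that contains x
    (the cited variant splits each dimension into 7 such regions)."""
    idx = int((x - lo) / (hi - lo) * n_regions)
    return min(max(idx, 0), n_regions - 1)   # clamp boundary values

def maybe_shrink(lo, hi, gbest_region_history, stall=80, n_regions=7):
    """If the global best position has stayed in one region of this
    dimension for `stall` consecutive generations, shrink the bounds
    to that region; otherwise keep them. A hypothetical helper, not
    the authors' implementation."""
    if (len(gbest_region_history) >= stall
            and len(set(gbest_region_history[-stall:])) == 1):
        r = gbest_region_history[-1]
        width = (hi - lo) / n_regions
        return lo + r * width, lo + (r + 1) * width
    return lo, hi
```

For example, with bounds [-7, 7] and a history showing the global best stuck in region 2 for 80 generations, `maybe_shrink` narrows the dimension to [-3, -1], one seventh of the original range.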