2020
DOI: 10.1109/tcyb.2019.2943928

Triple Archives Particle Swarm Optimization

Abstract: There are two common challenges in particle swarm optimization (PSO) research, that is, selecting proper exemplars and designing an efficient learning model for a particle. In this article, we propose a triple archives PSO (TAPSO), in which particles in three archives are used to deal with the above two challenges. First, particles who have better fitness (i.e., elites) are recorded in one archive while other particles who offer faster progress, called profiteers in this article, are saved in another archive. …
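The abstract's core idea — splitting the swarm into an archive of elites (best current fitness) and an archive of profiteers (fastest recent improvement) — can be illustrated with a minimal sketch. This is not the paper's actual procedure: the objective function, the dict-based swarm representation, the `archive_size` parameter, and the tie-breaking behavior are all illustrative assumptions for a minimization problem.

```python
def evaluate(x):
    # Toy minimization objective (sphere function); TAPSO itself is
    # objective-agnostic, this is only a stand-in for illustration.
    return sum(v * v for v in x)

def update_archives(swarm, prev_fitness, archive_size=5):
    """Sort particles into two archives, loosely following the abstract:
    - elites:     particles with the best (lowest) current fitness;
    - profiteers: particles with the largest fitness improvement since
                  the previous evaluation (fastest progress).
    `swarm` maps particle id -> position; `prev_fitness` maps id -> the
    fitness recorded at the previous iteration."""
    fitness = {i: evaluate(p) for i, p in swarm.items()}
    # Elite archive: top particles ranked by current fitness.
    elites = sorted(fitness, key=fitness.get)[:archive_size]
    # Profiteer archive: top particles ranked by improvement (old - new,
    # so a larger value means faster progress on a minimization problem).
    progress = {i: prev_fitness[i] - fitness[i] for i in swarm}
    profiteers = sorted(progress, key=progress.get, reverse=True)[:archive_size]
    return elites, profiteers, fitness
```

In the full algorithm these archives would then supply exemplars for the velocity update; the sketch stops at archive construction, which is the part the abstract describes.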

Cited by 164 publications (65 citation statements)
References 58 publications
“…• PSO has faster execution speed and higher efficiency of problem-solving. Based on this feature, [13] proposes a triple archives PSO (TAPSO) and [14] proposes an eXpanded PSO (XPSO). • For all particles in PSO, they can search solutions concurrently, which can be executed in parallel.…”
Section: Introduction
confidence: 99%
“…Usually, a kind of swarm intelligence algorithm has many variants, which have different search strategies or parameters. Take the PSO algorithm as an example; for single-objective optimization, there are adaptive PSO algorithm [8], time-varying attractor in PSO algorithm [9], interswarm interactive learning strategy in PSO algorithm [10], triple archives PSO algorithm [11], social learning PSO algorithm for scalable optimization [12], PSO variant for mixed-variable optimization problems [13], etc. For multiobjective optimization, there are adaptive gradient multiobjective PSO algorithm [14], coevolutionary PSO algorithm with bottleneck objective learning strategy [15], normalized ranking based PSO algorithm for many-objective optimization [16], etc.…”
Section: Introduction
confidence: 99%
“…Liu et al proposed a coevolutionary particle swarm optimization with a bottleneck objective learning (BOL) strategy for many-objective optimization (CPSO) [32]. Xia et al proposed a triple archives PSO (TAPSO) [33]. Although the convergence accuracy of these algorithms is higher compared with the basic PSO, their convergence and robustness are still unsatisfactory.…”
Section: Introduction
confidence: 99%