2011
DOI: 10.1109/tevc.2010.2052054

Orthogonal Learning Particle Swarm Optimization

Abstract: Particle swarm optimization (PSO) relies on its learning strategy to guide its search direction. Traditionally, each particle utilizes its own historical best experience and its neighborhood's best experience through a linear summation. Such a learning strategy is simple to use but inefficient when searching in complex problem spaces. Hence, designing learning strategies that can utilize previous search information (experience) more efficiently has become one of the most salient and active PSO research to…
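
For context, the "linear summation" the abstract refers to is the canonical PSO velocity update. A minimal sketch follows (NumPy; the inertia weight and acceleration coefficients are common literature defaults, not values taken from this paper):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445, rng=None):
    """One step of the conventional (global-best) PSO update: each particle
    combines its own historical best and the swarm's best experience by a
    weighted linear summation, as the abstract describes.
    Coefficient values are common defaults, not taken from this paper."""
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)  # independent uniform [0, 1) per dimension
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```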

Cited by 650 publications (181 citation statements) · References 51 publications
"…In this section, comparisons of SGO versus OEA, HPSO-TVAC (self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients) [26], APSO (adaptive particle swarm optimization) [27], CLPSO (comprehensive learning particle swarm optimization) [28], OLPSO-L (orthogonal learning particle swarm optimization) [29], and OLPSO-G [29] are carried out on the nine benchmarks listed in the Appendix. OEA uses 3.0 × 10⁵ FEs; HPSO-TVAC, CLPSO, APSO, OLPSO-L, and OLPSO-G use 2.0 × 10⁵ FEs; whereas SGO runs for 3 × 10³ FEs for the Sphere, Schwefel 1.2, and Schwefel 2.22 functions, 1.0 × 10² FEs for Step, 4.0 × 10² FEs for Rastrigin, Noncontinuous Rastrigin, and Griewank, and 1.0 × 10³ FEs for the Ackley and Quartic functions.…"
Section: Experiments 3: SGO vs. OEA, HPSO-TVAC, CLPSO, APSO, OLPSO-L and OLPSO-G (mentioning)
confidence: 99%
"…OEA uses 3.0 × 10⁵ FEs; HPSO-TVAC, CLPSO, APSO, OLPSO-L, and OLPSO-G use 2.0 × 10⁵ FEs; whereas SGO runs for 3 × 10³ FEs for the Sphere, Schwefel 1.2, and Schwefel 2.22 functions, 1.0 × 10² FEs for Step, 4.0 × 10² FEs for Rastrigin, Noncontinuous Rastrigin, and Griewank, and 1.0 × 10³ FEs for the Ackley and Quartic functions. The results of OEA, HPSO-TVAC, CLPSO, and APSO are taken directly from [28] and [27]; the results of OLPSO-L and OLPSO-G are taken directly from [29] and listed in Table 3. In the table, "NA" indicates that the experiment was not conducted for that function.…"
Section: Experiments 3: SGO vs. OEA, HPSO-TVAC, CLPSO, APSO, OLPSO-L and OLPSO-G (mentioning)
confidence: 99%
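
Several of the benchmark functions named in these excerpts have standard closed forms. A minimal sketch of three of them follows (textbook definitions, assumed rather than copied from the cited papers; one FE corresponds to one call of such a function):

```python
import numpy as np

def sphere(x):
    """Sphere: unimodal, minimum f(0) = 0."""
    return np.sum(x ** 2)

def rastrigin(x):
    """Rastrigin: highly multimodal, minimum f(0) = 0."""
    return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))

def ackley(x):
    """Ackley: multimodal, minimum f(0) = 0."""
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n)
            + 20.0 + np.e)
```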
"…As is known, both selective pressure and population diversity are significant for EAs [35–38]. However, the traditional DE uses a one-to-one competition between each target vector and its corresponding trial vector as the selection operation, which may induce weak selective pressure owing to its unbiased selection of parents or target vectors [26].…"
Section: Motivations (mentioning)
confidence: 99%
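
A minimal sketch of the one-to-one competition mechanism the excerpt describes (standard DE survivor selection for minimization; the function name and array layout here are illustrative assumptions):

```python
import numpy as np

def de_select(targets, trials, f):
    """Classic DE survivor selection: each trial vector competes only
    against its own target vector (minimization), so selection is
    one-to-one and unbiased across the population."""
    f_target = np.array([f(x) for x in targets])
    f_trial = np.array([f(u) for u in trials])
    keep = f_trial <= f_target  # ties go to the trial, as in standard DE
    return np.where(keep[:, None], trials, targets)
```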
"…However, a common problem when applying PSO to multimodal function optimization is that the particle population loses diversity too rapidly, before it converges to reasonable solutions [3–7]; this is commonly referred to as premature convergence.…"
Section: Introduction (mentioning)
confidence: 99%
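
Diversity loss of this kind is often tracked with a distance-to-centroid measure. A minimal sketch of one such metric follows (a common proxy, not one prescribed by the cited papers):

```python
import numpy as np

def swarm_diversity(positions):
    """Mean Euclidean distance of particles from the swarm centroid;
    a common (not paper-specific) proxy for population diversity,
    whose rapid decay signals premature convergence."""
    centroid = positions.mean(axis=0)
    return float(np.linalg.norm(positions - centroid, axis=1).mean())
```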