2004
DOI: 10.1007/978-3-540-24653-4_52
A Hierarchical Particle Swarm Optimizer for Dynamic Optimization Problems

Abstract: Particle Swarm Optimization (PSO) methods for dynamic function optimization are studied in this paper. We compare dynamic variants of standard PSO and Hierarchical PSO (H-PSO) on different dynamic benchmark functions. Moreover, a new type of hierarchical PSO, called Partitioned H-PSO (PH-PSO), is proposed. In this algorithm the hierarchy is partitioned into several sub-swarms for a limited number of generations after a change has occurred. Different methods for determining the time when to rejoin the hierarchy…
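The partition-and-rejoin schedule described in the abstract can be pictured with a small sketch. The following is a hypothetical, minimal illustration rather than the paper's implementation: a detected change flips the swarm into a partitioned topology for a fixed number of generations, after which the sub-swarms are merged back into a single hierarchy. The PHPSOState class, the partition_length default, and the fixed-length rejoin rule are all assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class PHPSOState:
    partitioned: bool = False
    generations_left: int = 0   # generations remaining in the partitioned phase

def topology_for_generation(state: PHPSOState, change_detected: bool,
                            partition_length: int = 10) -> str:
    """Return 'partitioned' or 'single' for this generation (sketch only)."""
    if change_detected:
        # A detected change splits the hierarchy into sub-swarms
        # for a limited number of generations.
        state.partitioned = True
        state.generations_left = partition_length
    elif state.partitioned:
        state.generations_left -= 1
        if state.generations_left <= 0:
            # Rejoin: the sub-swarms merge back into one hierarchy.
            state.partitioned = False
    return "partitioned" if state.partitioned else "single"

# Example: a change at generation 5 keeps the swarm partitioned for 10 generations.
state = PHPSOState()
topologies = [topology_for_generation(state, change_detected=(g == 5)) for g in range(20)]
```

The fixed-length schedule used here corresponds to the simplest of the rejoin criteria the abstract alludes to; the paper compares several such methods.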

Cited by 73 publications (52 citation statements). References 9 publications.
“…Janson and Middendorf [162] proposed a partitioned hierarchical PSO (PH-PSO) that uses a dynamic tree-based neighbourhood structure. Each particle is placed on a single node of the tree and particles are arranged hierarchically according to their fitness.…”
Section: Maintaining Diversity During Execution (mentioning, confidence: 99%)
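As a rough illustration of the fitness-ordered tree mentioned in this statement, the sketch below stores the hierarchy heap-style in a list and lets a child that has found a better personal best swap places with its parent. The binary branching, the single bottom-up sweep, and the assumption of minimisation are illustrative choices, not details taken from the paper.

```python
def reorder_hierarchy(best_fitness: list[float], order: list[int]) -> None:
    """Swap children with worse parents, in place (minimisation assumed)."""
    for child in range(1, len(order)):
        parent = (child - 1) // 2
        if best_fitness[order[child]] < best_fitness[order[parent]]:
            order[child], order[parent] = order[parent], order[child]

# order[i] is the index of the particle sitting at tree node i; node 0 is the root.
best_fitness = [3.2, 1.5, 4.0, 0.7]
order = [0, 1, 2, 3]
reorder_hierarchy(best_fitness, order)   # particle 3 moves one level up toward the root
```

In such a hierarchy each particle would be guided by the best position of the particle above it rather than by a single global best, which is what makes the neighbourhood dynamic.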
“…Most of the particle swarm algorithms present in the literature deal only with continuous variables [1,5,10]. This is a significant limitation because many optimization problems are set in a space featuring discrete variables.…”
Section: Introduction (mentioning, confidence: 99%)
“…This gives a dynamic neighbourhood that does not require extensive calculation. This has been adapted to dynamic problems by Janson and Middendorf [23,24]. After the value of the best-known position (gbest) changes (it is reevaluated every cycle), a few sub-swarms are reinitialised while the rest are reset (they have their old personal best information erased and replaced with the current position).…”
Section: Forcing Explorer Particles (mentioning, confidence: 99%)
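A hypothetical sketch of the two responses described in this statement: "reinitialised" particles receive fresh random positions, while "reset" particles keep their positions but have their outdated personal-best memory replaced by the current position. The dictionary representation, the reinitialised fraction, and the search bounds below are assumptions for illustration only.

```python
import random

def respond_to_change(particles: list[dict], reinit_fraction: float = 0.25,
                      lower: float = -5.0, upper: float = 5.0) -> None:
    n_reinit = int(len(particles) * reinit_fraction)
    for i, p in enumerate(particles):
        if i < n_reinit:
            # Reinitialise: a fresh random position in the search space.
            p["position"] = [random.uniform(lower, upper) for _ in p["position"]]
        # Reset: erase the old personal best and replace it with the current position,
        # forcing a re-evaluation on the changed landscape.
        p["best_position"] = list(p["position"])
        p["best_fitness"] = None

# Example: in a swarm of 8 particles, the first 2 are reinitialised and all memories reset.
swarm = [{"position": [0.0, 0.0], "best_position": [0.0, 0.0], "best_fitness": 1.0}
         for _ in range(8)]
respond_to_change(swarm)
```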
“…For dynamic problems this tabu list can be completely removed, on the grounds that any particular point in problem space may be a good optimum at several disjoint times. Alternatively, extending the idea from Janson and Middendorf [24], each of these previously explored optima could be periodically re-examined, and only those points whose fitness has significantly changed would be removed from the tabu lists. It is not clear at this stage how the evolutionary pressure that is an important part of WoSP would respond to a dynamic problem.…”
Section: Forcing Explorer Particles (mentioning, confidence: 99%)
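The periodic re-examination suggested in this statement could look roughly like the sketch below: previously explored optima stay on the tabu list only while their re-evaluated fitness remains close to the value recorded when they were added. The (point, recorded_fitness) representation and the tolerance are assumptions, not part of the cited work.

```python
def prune_tabu_list(tabu, evaluate, tolerance=1e-3):
    """Keep only tabu points whose fitness is essentially unchanged (sketch)."""
    kept = []
    for point, recorded_fitness in tabu:
        if abs(evaluate(point) - recorded_fitness) <= tolerance:
            # Still looks like the same optimum: keep excluding it from the search.
            kept.append((point, recorded_fitness))
    return kept
```

A swarm using such a list would call this every few generations, so that an optimum whose value has shifted after a change becomes searchable again.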