2021
DOI: 10.1007/978-3-030-79553-5_9
Probabilistic Multimodal Optimization

Cited by 5 publications (2 citation statements) · References 64 publications
“…r1 and r2 are uniformly and randomly sampled within [0, 1]. Equation (2) shows that in the canonical PSO, all particles learn from the global best position gbest of the whole swarm found so far. Such an exemplar is too greedy and thus easily leads the swarm to fall into local basins when coping with optimization problems with many local regions [49,74,75].…”

Section: Canonical PSO
Confidence: 99%
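The canonical PSO update the excerpt refers to (its Equation (2)) can be sketched as below. This is a minimal illustrative implementation, not code from the cited work: the inertia weight w, acceleration coefficients c1 and c2, the search bounds, and the sphere objective are all assumed values chosen for demonstration. It shows the greedy exemplar the excerpt criticizes — every particle is pulled toward the single gbest of the whole swarm.

```python
import random

def canonical_pso(f, dim, n_particles=20, iters=200,
                  w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical (gbest) PSO sketch. All parameter defaults are
    illustrative assumptions, not values from the cited chapter."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5.0, 5.0) for _ in range(dim)]
          for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                 # personal best positions
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]   # global best of the swarm

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # r1, r2 uniformly and randomly sampled within [0, 1],
                # as in the excerpt.
                r1, r2 = rng.random(), rng.random()
                # Every particle learns from the same gbest exemplar;
                # on multimodal problems this greediness drives the
                # whole swarm into one local basin.
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i][:], fx
    return gbest, gbest_f

# Toy run on the (unimodal) sphere function, an assumed test objective.
best, best_f = canonical_pso(lambda x: sum(t * t for t in x), dim=3)
```

On a unimodal objective like the sphere the gbest pull is harmless and the swarm converges quickly; the excerpt's point is that the same exemplar choice becomes a liability when the landscape has many local regions.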
“…Optimization problems arise commonly and are becoming increasingly complicated in many research fields and in industrial engineering [1,2], such as object detection and tracking [3,4], automatic design of algorithms for visual attention [5,6], path planning optimization [7,8], and control of pollutant spreading on social networks [9]. In particular, these complicated optimization problems are usually non-differentiable, discontinuous, non-convex, non-linear, or multimodal [10][11][12].…”

Section: Introduction
Confidence: 99%