2020
DOI: 10.1109/tcyb.2018.2884083

A Many-Objective Particle Swarm Optimizer With Leaders Selected From Historical Solutions by Using Scalar Projections


Cited by 25 publications (29 citation statements)
References 71 publications
“…Multiple swarms coevolved in a distributed fashion to maintain diversity for approximating the entire PFs, while a novel bottleneck objective learning strategy was used to accelerate convergence for all objectives. In MaPSO [46], a novel MOPSO based on the acute angle was proposed, in which each particle's leader was selected from its historical particles by using scalar projections, and each particle kept information on K historical particles (K was set to 3 in its experiments). Moreover, the environmental selection in MaPSO was based on the acute angle between each pair of particles.…”
Section: Some Current MOPSOs and MOEAs for MaOPs
confidence: 99%
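The leader-selection mechanism described in the excerpt above can be sketched as follows. This is a minimal illustration, not MaPSO's actual implementation: the function and variable names are hypothetical, and the assumption that (for minimization) the historical solution with the smallest scalar projection onto a reference direction is the most promising leader is ours, not necessarily the paper's exact rule.

```python
import numpy as np

def select_leader(historical_objs, direction):
    """Pick a leader from a particle's K stored historical solutions by
    projecting their objective vectors onto a reference direction.
    Illustrative sketch only; names do not follow the paper's notation."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)      # unit reference direction
    proj = historical_objs @ d     # scalar projection of each solution
    # Assumption: for minimization, the smallest projection (closest to
    # the ideal point along the direction) marks the best leader.
    return historical_objs[np.argmin(proj)]

# K = 3 stored historical solutions in a 2-objective space (toy data)
hist = np.array([[1.0, 2.0], [3.0, 1.0], [2.0, 2.0]])
leader = select_leader(hist, [1.0, 1.0])
```

With this toy data, the projections onto the normalized direction are 3/√2, 4/√2, and 4/√2, so the first solution is returned as leader.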
“…(1) NMPSO [24]: this algorithm uses a balanceable fitness estimation to offer sufficient selection pressure during the search, considering both convergence and diversity by means of weight vectors. (2) MaPSO [46]: this is an angle-based MOPSO that chooses each particle's leader from its historical particles by using scalar projections to guide the particles.…”
Section: The Experimental Studies
confidence: 99%
“…The third sub-section compares our proposed HGLSS with other leader selection strategies under the framework of the MOPSO algorithm. The last sub-section compares the performance of MOPSO-HGLSS with nine popular population-based metaheuristics (SMPSO [25], dMOPSO [29], MOPSOhv [30], MaPSO [31], MOEA/D [32], NSGA-III [33], DBEA [34], RVEA [35] and ARMOEA [36]) in terms of IGD+ on scalable MOPs with 3, 5, 8 and 10 objectives. Section V presents our conclusions and some possible paths for future research.…”
Section: Some MOEAs Have Recently Been Proposed to Handle
confidence: 99%
“…The compared algorithms can be classified into two groups: MOPSOs and MOEAs. The group of MOPSOs consists of SMPSO [25], dMOPSO [29], MOPSOhv [30], MaPSO [31] and the proposed algorithm, while the other group consists of MOEA/D [32], NSGA-III [33], DBEA [34], RVEA [35] and ARMOEA [36]. In [25], the authors found that the speed of the particles in MOPSO was sometimes too high, making the particles move directly towards the boundaries.…”
Section: Performance Comparisons of HGLSS-MOPSO with Other Algorithms
confidence: 99%