2019
DOI: 10.1080/00207721.2019.1645914
A new multi-objective particle swarm optimisation algorithm based on R2 indicator selection mechanism

Cited by 11 publications (10 citation statements)
References 27 publications
“…Wei et al [99] proposed a many-objective particle swarm optimizer based on the R2 indicator to achieve better convergence, which maximizes the internal density of a population. This method employs a bi-level archive-maintaining strategy based on the R2 indicator and objective space decomposition to maintain well-distributed solutions, which maximizes the dynamic similarity of a population.…”
Section: R2 Indicator-based Methods (mentioning)
confidence: 99%
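For reference, the unary R2 indicator that underlies such selection mechanisms is commonly computed from a weighted Tchebycheff utility over a set of weight vectors. The following is a minimal sketch of that standard computation, not the authors' bi-level archive-maintaining strategy; the weight vectors, reference point z_ref, and the toy front are illustrative assumptions.

import numpy as np

def r2_indicator(front, weights, z_ref):
    # Unary R2 indicator (smaller is better) of a solution set `front`,
    # assuming minimisation. front: (n_solutions, n_objectives) objective
    # values; weights: (n_weights, n_objectives) weight vectors;
    # z_ref: (n_objectives,) ideal/reference point.
    diffs = front - z_ref
    # weighted Tchebycheff utility of every solution under every weight vector
    tcheb = np.max(weights[:, None, :] * diffs[None, :, :], axis=2)
    # best utility per weight vector, averaged over all weight vectors
    return np.mean(np.min(tcheb, axis=1))

# toy usage: three candidate solutions, two objectives, five weight vectors
front = np.array([[0.1, 0.9], [0.5, 0.5], [0.9, 0.1]])
w = np.linspace(0.0, 1.0, 5)
weights = np.column_stack([w, 1.0 - w])
print(r2_indicator(front, weights, z_ref=np.zeros(2)))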
“…The number of function evaluations (NFE) is chosen as the termination criterion. The NFE budgets are defined following recent articles [35]-[38]. For the two-objective test problems of ZDT and CEC09 (UF1-UF7), the maximum NFE is set to 10k and 60k, respectively.…”
Section: IGD(S, Z) (mentioning)
confidence: 99%
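The excerpt above uses a fixed evaluation budget rather than an iteration count as the stopping rule. A minimal sketch of such a budget-driven loop follows; the evaluate and step callables are illustrative placeholders, and only the 10k/60k budget values come from the excerpt.

def run_with_nfe_budget(evaluate, population, step, max_nfe=10_000):
    # Stop once `max_nfe` objective evaluations have been spent,
    # regardless of how many iterations that takes.
    # evaluate: maps one solution to its objective values (one NFE each);
    # step: produces the next population from the current one and its values.
    nfe = 0
    while nfe < max_nfe:
        values = [evaluate(x) for x in population]
        nfe += len(population)  # NFE grows by one per evaluated solution
        population = step(population, values)
    return population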
“…The parameter settings for the ESMOPSO algorithm include polynomial mutation with crossover probability p_c = 0.5 and mutation probability p_m = 1/V. During the particle update, the inertia weight is adaptively adjusted by a cosine function [37], the social learning factor is c_1 = 1.2, and the individual learning factor is c_2 = 1. The performance of the algorithm is shown intuitively in the form of data and graphs.…”
Section: Experimental Settings (mentioning)
confidence: 99%
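The quoted settings (cosine-adjusted inertia weight [37], social factor c_1 = 1.2, individual factor c_2 = 1, polynomial mutation with p_m = 1/V) slot into a standard particle-swarm update. The sketch below assumes a commonly used cosine decay schedule for the inertia weight and a simplified polynomial-mutation operator; the exact forms used by ESMOPSO and in [37] are not given in the excerpt.

import numpy as np

def inertia_weight(t, t_max, w_min=0.4, w_max=0.9):
    # assumed cosine schedule: decays smoothly from w_max to w_min over the run
    return w_min + (w_max - w_min) * (1.0 + np.cos(np.pi * t / t_max)) / 2.0

def pso_update(x, v, pbest, gbest, t, t_max, c1=1.2, c2=1.0, rng=None):
    # One velocity/position update; following the excerpt's naming, c1 weights
    # the social (global-best) term and c2 the individual (personal-best) term.
    rng = np.random.default_rng() if rng is None else rng
    w = inertia_weight(t, t_max)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (gbest - x) + c2 * r2 * (pbest - x)
    return x + v_new, v_new

def polynomial_mutation(x, lower, upper, p_m, eta=20.0, rng=None):
    # Simplified polynomial mutation: each decision variable mutates with
    # probability p_m (= 1/V in the excerpt, V = number of variables).
    rng = np.random.default_rng() if rng is None else rng
    y = x.copy()
    for i in range(len(x)):
        if rng.random() < p_m:
            u = rng.random()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
            y[i] = np.clip(x[i] + delta * (upper[i] - lower[i]), lower[i], upper[i])
    return y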