2019
DOI: 10.1109/TEVC.2018.2874465

A Many-Objective Evolutionary Algorithm With Two Interacting Processes: Cascade Clustering and Reference Point Incremental Learning

Abstract: Research has shown the difficulty of obtaining proximity while maintaining diversity in many-objective optimization problems. The complexity of the true Pareto front poses challenges for reference-vector-based algorithms, whose adaptability to its diverse characteristics is insufficient when no priors are available. This paper proposes a many-objective optimization algorithm with two interacting processes: cascade clustering and reference point incremental learning (CLIA). In the population selection process based on cascade clustering, …

Cited by 70 publications (24 citation statements, all of type “mentioning”) | References 56 publications

“…In [8], the algorithm generates a set of reference vectors to decompose the original MOPs into single-objective subproblems and to elicit user preferences so as to target a preferred subset of the whole Pareto front (PF). In [9], the algorithm uses the reference vectors provided by an incremental-learning-based process to select promising solutions. One remaining limitation is how to generate a set of uniformly distributed reference points or vectors.…”
Section: In Addition, MOEAs Can Easily Solve Complex Optimization
Citation type: mentioning; confidence: 99%
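
The uniform-generation problem this statement raises has a standard baseline: the Das–Dennis simplex-lattice design used by NSGA-III-style algorithms. The sketch below is a minimal Python illustration of that baseline only, not the adaptive schemes of [8] or [9]; the function name and parameter choices are ours.

    from itertools import combinations

    def das_dennis(n_objectives, n_partitions):
        """Uniformly spaced points on the unit simplex via the
        Das-Dennis simplex-lattice design (stars and bars)."""
        points = []
        slots = n_partitions + n_objectives - 1
        for dividers in combinations(range(slots), n_objectives - 1):
            prev, coords = -1, []
            for d in dividers:                       # stars between dividers
                coords.append((d - prev - 1) / n_partitions)
                prev = d
            coords.append((slots - 1 - prev) / n_partitions)
            points.append(coords)
        return points

    # Example: 3 objectives, 12 partitions -> C(14, 2) = 91 reference points
    refs = das_dennis(3, 12)
    assert len(refs) == 91 and all(abs(sum(p) - 1.0) < 1e-9 for p in refs)
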
“…Associating K reference points with K solutions has a complexity […]

Algorithm 2: Update reference set
Input: population (P), archive (A), last updated reference set (R), population size (N)
Output: R, B and P
 1  Determine the number of reference points to add: K = min(√N, |A|)
    // Add K reference points
 2  while |R| − N < K do
 3      Compute the furthest member i of A to P by the max-min distance [32]
 4      Add i to P
 5      Add into R the projection of i on the reference plane
    // Remove K reference points
 6  Calculate the reference score ζ(r_i) using Eq. (4) for all r_i ∈ R
 7  while |R| > N and there exists a reference score larger than 0 do
 8      Identify the reference point with the largest score: î = argmax_{1≤i≤|R|} ζ(r_i)   // pick î (randomly if a tie exists)
 9      Remove the reference point r_î from R and the solution associated with it from P …”
Section: Reference Set Selection
Citation type: mentioning; confidence: 99%
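
Read as code, the quoted procedure alternates an expansion phase (pull distant archive members in and add their projections as new references) with a contraction phase (drop references whose score is positive). Below is a rough, runnable Python sketch of that loop; the helpers score, max_min_dist and project stand in for the paper's Eq. (4), the max-min distance [32] and the reference-plane projection, and the data layout is our assumption, not the authors' code.

    import math

    def update_reference_set(pop, archive, refs, N, score, max_min_dist, project):
        """Sketch of the quoted 'Update reference set' procedure.
        refs is a list of {'vec': reference_vector, 'sol': solution}
        dicts, keeping the reference-solution association explicit
        ('sol' may be None for pre-existing references)."""
        K = min(math.isqrt(N), len(archive))         # number of points to add
        # Expansion: add K reference points drawn from the archive.
        while len(refs) - N < K and archive:
            far = max(archive, key=lambda a: max_min_dist(a, pop))
            archive.remove(far)
            pop.append(far)                          # step 4: add i to P
            refs.append({'vec': project(far), 'sol': far})
        # Contraction: remove references while any score is positive.
        while len(refs) > N and any(score(r['vec']) > 0 for r in refs):
            worst = max(refs, key=lambda r: score(r['vec']))  # ties: first wins
            refs.remove(worst)
            if worst.get('sol') in pop:
                pop.remove(worst['sol'])             # drop associated solution
        return pop, refs
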
“…Wu et al. introduced a Gaussian process regression model to aid reference sampling [37]. Similarly, an incremental learning model was employed for reference adaptation [9]. It should be noted, however, that using models for reference setting may incur additional computational cost, as there is no free lunch for such improvements [36].…”
Section: Introduction
Citation type: mentioning; confidence: 99%
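
For context on the Gaussian-process idea mentioned for [37], the sketch below shows the general pattern of fitting a GP to previously evaluated reference vectors and sampling new ones optimistically. It is a generic scikit-learn illustration, not Wu et al.'s actual model; the utility signal and the top-k selection rule are assumptions.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)

    # Previously tried reference vectors (points on the 3-simplex) and a
    # placeholder utility, e.g. how many solutions each vector attracted.
    tried = rng.dirichlet(np.ones(3), size=50)
    utility = rng.random(50)

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(tried, utility)

    # Sample candidates and keep the ten with the best optimistic estimate.
    candidates = rng.dirichlet(np.ones(3), size=500)
    mean, std = gp.predict(candidates, return_std=True)
    new_refs = candidates[np.argsort(mean + std)[-10:]]
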
“…To address the above challenges, it is desirable to learn the locations of reference vectors from the knowledge acquired over the whole search history, rather than from the solutions of the current generation alone. In [27], the active and inactive reference vectors identified by cascade clustering selection are used to train an incremental support vector machine classifier that estimates the regions where the reference vectors should be located. In [28], a self-organizing map (SOM) is adopted to learn the distribution of weight vectors from the neighbourhood information in the locations of the obtained solutions.…”
Section: Introduction
Citation type: mentioning; confidence: 99%
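
The incremental-SVM idea attributed to [27] can be pictured as a binary classifier that is updated each generation with the active/inactive labels produced by cascade clustering, then queried to screen candidate reference vectors. The sketch below uses scikit-learn's SGD-trained linear SVM as a stand-in for an incremental SVM; the labeling and screening functions are our assumptions, not CLIA's implementation.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    # Hinge loss gives a linear SVM that supports incremental updates
    # via partial_fit, one call per generation.
    clf = SGDClassifier(loss="hinge")

    def update_classifier(active_refs, inactive_refs):
        """References that attracted solutions are labeled 1 (active),
        the rest 0 (inactive)."""
        X = np.vstack([active_refs, inactive_refs])
        y = np.r_[np.ones(len(active_refs)), np.zeros(len(inactive_refs))]
        clf.partial_fit(X, y, classes=[0, 1])

    def screen(candidates):
        """Keep candidate reference vectors predicted to fall in the
        region of active references."""
        return candidates[clf.predict(candidates) == 1]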