2006 IEEE International Conference on Evolutionary Computation
DOI: 10.1109/cec.2006.1688438
Comparison between Single-Objective and Multi-Objective Genetic Algorithms: Performance Comparison and Performance Measures

Cited by 50 publications (24 citation statements)
References 13 publications
“…The work in [13] shows that common MOEA measures such as hypervolume [31] are not necessarily suitable for comparing solutions found by MOEAs (our BLiM approach) with solutions found by SOEAs (the baseline in this work). Therefore, to compare the baseline approach with BLiM, we first take the best solution of the baseline approach for its single objective (the similarity with the bug description), and then take the best solution of BLiM with respect to that same objective, as described in [13]. Finally, these solutions are compared against the bug realization of the oracle to obtain a confusion matrix.…”
Section: Methods
confidence: 96%
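The comparison protocol in the excerpt above can be sketched as follows. The `Solution` container, the oracle set, and the element universe are illustrative assumptions, not artifacts of the cited work; only the idea of picking each run's best solution on the single objective and scoring it against an oracle comes from the excerpt.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Solution:
    # objective values, e.g. (similarity with the bug description, ...)
    objectives: tuple
    # code elements this solution marks as relevant to the bug (hypothetical)
    selected: frozenset = field(default_factory=frozenset)

def best_by_objective(population, idx):
    """Best solution of a run (single- or multi-objective) w.r.t. objective idx."""
    return max(population, key=lambda s: s.objectives[idx])

def confusion_matrix(predicted, oracle, universe):
    """Compare a predicted element set against the oracle set."""
    tp = len(predicted & oracle)
    fp = len(predicted - oracle)
    fn = len(oracle - predicted)
    tn = len(universe - (predicted | oracle))
    return tp, fp, fn, tn

# toy usage: pick the solution that is best on objective 0, score it vs. the oracle
population = [Solution((0.9,), frozenset({"a", "b"})),
              Solution((0.7,), frozenset({"a", "c"}))]
best = best_by_objective(population, 0)
print(confusion_matrix(best.selected, frozenset({"a"}),
                       frozenset({"a", "b", "c", "d"})))   # (1, 1, 0, 2)
```

The point of the protocol is that both approaches are judged on the same scalar objective, sidestepping the hypervolume-comparability issue raised in [13].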
“…We denote this test problem as the 2-500 problem. The two objectives f1(x) and f2(x) of the 2-500 problem were generated by randomly assigning each item an integer profit in the closed interval [10, 100] (see [34]). Two further objectives f3(x) and f4(x) were generated in the same manner.…”
Section: Computational Experiments on Knapsack Problems
confidence: 99%
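The objective construction described above can be sketched like this. Function names and seed values are assumptions; only the integer profit range [10, 100] and the 500-item, multi-objective setup come from the excerpt.

```python
import random

def random_profits(n_items, seed, low=10, high=100):
    """One knapsack objective: a random integer profit in [low, high] per item."""
    rng = random.Random(seed)
    return [rng.randint(low, high) for _ in range(n_items)]

def total_profit(profits, x):
    """Objective value f(x): summed profit of the items selected by bit vector x."""
    return sum(p for p, bit in zip(profits, x) if bit)

# a 4-objective, 500-item instance: f1..f4 each get independent profits
objectives = [random_profits(500, seed) for seed in range(4)]
# e.g. evaluate a solution that selects the first 10 items on objective f1
x = [1] * 10 + [0] * 490
print(total_profit(objectives[0], x))
```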
“…Pareto dominance is used for fitness evaluation in almost all well-known and frequently used EMO algorithms such as NSGA-II [4], SPEA [34] and SPEA2 [33]. Whereas Pareto dominance-based EMO algorithms usually work very well on multi-objective problems with two or three objectives, they often have difficulty handling many-objective problems with four or more objectives, as pointed out in several studies [7], [10], [16], [23], [24], [35]. This is because almost all individuals in the current population are mutually non-dominated when they are compared on many objectives.…”
Section: Introduction
confidence: 99%
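The effect described above is easy to reproduce with a plain Pareto-dominance check over random objective vectors: as the number of objectives grows, the non-dominated fraction of a fixed-size population climbs toward 1. This is a self-contained illustration, not code from any of the cited algorithms.

```python
import random

def dominates(a, b):
    """a Pareto-dominates b (maximization): no worse everywhere, better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def nondominated_fraction(population):
    """Fraction of individuals not dominated by any other individual."""
    return sum(
        not any(dominates(q, p) for q in population if q is not p)
        for p in population
    ) / len(population)

rng = random.Random(0)
for m in (2, 4, 10):          # number of objectives
    pop = [tuple(rng.random() for _ in range(m)) for _ in range(100)]
    print(m, nondominated_fraction(pop))
```

With two objectives most of a random population is dominated; with ten, nearly everyone is mutually non-dominated, so pure Pareto ranking loses almost all selection pressure, which is exactly the difficulty the excerpt points at.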
“…Although an approximation of the Pareto front may, in principle, be traced out for a convex multiobjective model by varying the weights assigned to the criteria in a single objective function, the accuracy of this approximation depends sensitively on the method used to assign those weights [54]. The most popular weight-assignment method is the linear weighted-sum method, but many other schemes exist [30,31,32,33,35,48,55]. Finding weighting methods that yield close Pareto front approximations can be tedious and is usually highly problem-specific.…”
Section: Literature Review
confidence: 99%
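A minimal sketch of the linear weighted-sum method, including the well-known failure mode behind the convexity caveat above: a Pareto-optimal point lying in a nonconvex region of the front is never the scalarized optimum, no matter how the weights are chosen. The candidate points are made up for illustration.

```python
def weighted_sum(point, weights):
    """Linear weighted-sum scalarization (maximization)."""
    return sum(w * f for w, f in zip(weights, point))

# (1.9, 2.9) sits in a nonconvex dent of the front spanned by (1, 4) and (4, 1):
# no weight vector ever makes it the argmax of the weighted sum.
candidates = [(1.0, 4.0), (1.9, 2.9), (4.0, 1.0)]
for i in range(11):
    w = (i / 10, 1 - i / 10)
    best = max(candidates, key=lambda p: weighted_sum(p, w))
    print(w, "->", best)   # best is always (1.0, 4.0) or (4.0, 1.0)
```

This is why weighted sums can at best trace the convex hull of the Pareto front, while Pareto-based EMO algorithms can in principle retain the nonconvex portions as well.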