1996
DOI: 10.1007/3-540-61723-x_1022
On the performance assessment and comparison of stochastic multiobjective optimizers

Cited by 295 publications (184 citation statements)
References 2 publications
“…In Fonseca and Fleming [10], the performance of an SLS algorithm for multiobjective problems is associated with the probability of attaining (dominating or being equal to) an arbitrary point in the objective space in one single run. This function is called attainment function [36] and it can be seen as a generalization of the distribution function of solution cost [1] to the multiobjective case.…”
Section: Performance Assessment Methodology (confidence: 99%)
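The attainment-function idea quoted above can be sketched empirically: given the outcome sets of several independent runs, the empirical attainment function at a goal point is the fraction of runs that produced at least one point weakly dominating (dominating or equal to) that point. A minimal sketch for bi-objective minimization; the run data and goal points are illustrative, not from the paper:

```python
def weakly_dominates(a, b):
    """True if point a dominates or equals point b (minimization)."""
    return all(ai <= bi for ai, bi in zip(a, b))

def empirical_attainment(runs, z):
    """Fraction of runs whose outcome set attains goal point z,
    i.e. contains at least one point weakly dominating z."""
    hits = sum(any(weakly_dominates(p, z) for p in run) for run in runs)
    return hits / len(runs)

# Three independent runs, each producing a nondominated set (illustrative).
runs = [
    [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)],
    [(1.5, 3.0), (3.0, 1.5)],
    [(2.5, 2.5), (5.0, 0.5)],
]

print(empirical_attainment(runs, (2.0, 2.0)))  # attained by run 1 only
print(empirical_attainment(runs, (3.0, 3.0)))  # attained by every run -> 1.0
```

Sweeping the goal point over the objective space yields the full attainment function, generalizing the single-objective distribution of solution cost mentioned in the quote.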
“…Instead, we employ a sound methodology that follows three steps. In a first step, the outcomes of the algorithms are compared pairwise with respect to outperformance relations [9]; if these comparisons do not yield clear conclusions, in a second step we compute the attainment functions to detect significant differences between sets of outcomes [10,11]. If such differences are detected, graphical illustrations are used in a third step to examine the areas of the objective space where the results of two algorithms differ most strongly [12].…”
Section: Introduction (confidence: 99%)
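The first step of the methodology quoted above, pairwise comparison via outperformance relations, can be sketched as a set-coverage check: one outcome set weakly outperforms another if every point of the second is weakly dominated by some point of the first and the sets differ. A minimal sketch with illustrative data (the function names are assumptions, not the paper's notation):

```python
def weakly_dominates(a, b):
    """True if a is no worse than b in every objective (minimization)."""
    return all(ai <= bi for ai, bi in zip(a, b))

def weakly_outperforms(A, B):
    """A weakly outperforms B if every point of B is weakly dominated
    by some point of A and the two sets are not identical."""
    covered = all(any(weakly_dominates(a, b) for a in A) for b in B)
    return covered and set(A) != set(B)

A = [(1.0, 3.0), (2.0, 1.0)]
B = [(1.5, 3.5), (2.5, 1.5)]

print(weakly_outperforms(A, B))  # True: every point of B is covered by A
print(weakly_outperforms(B, A))  # False
```

When neither set outperforms the other, the comparison is inconclusive, which is exactly the case where the quoted methodology falls back on attainment functions.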
“…The experimentation also includes attainment surfaces (Fonseca and Fleming 1996) to allow an easy visual comparison of the performance of the algorithms. In addition, we also use the robustness visualization model proposed in Chica et al (2013) for answering the question about how robust a Pareto front is.…”
Section: Multiobjective Performance and Robustness Indicators (confidence: 99%)
“…We used the hypervolume of the nondominated space at each generation as a performance metric, since DAKOTA uses this metric to guide the optimization. The hypervolume has been shown to be an effective metric for comparing the performance of various EAs, and has also been shown to be safer than many other metrics in that it is Pareto-compliant (Fonseca et al., 2005; Zitzler and Thiele, 1999; Minella et al., 2008). Pareto-compliancy indicates that the metric is not susceptible to cases where, when comparing two Pareto front approximations, the front the metric identifies as superior is actually the worse of the two.…”
Section: Optimization Performance (confidence: 99%)
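The hypervolume argument in the last quote can be made concrete: for a bi-objective minimization front, the hypervolume is the area weakly dominated by the front and bounded by a reference point, and a dominated point contributes nothing, which is one face of the Pareto-compliance property. A minimal 2-D sweep-line sketch with illustrative data (not the DAKOTA implementation):

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-D minimization front, bounded by the
    reference point ref. Assumes both objectives are minimized."""
    # Consider only points strictly better than the reference point,
    # swept in order of increasing first objective.
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:          # points failing this test are dominated
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
ref = (4.0, 4.0)
print(hypervolume_2d(front, ref))                  # 6.0
print(hypervolume_2d(front + [(3.5, 3.5)], ref))   # still 6.0: dominated point adds nothing
```

Higher-dimensional hypervolume needs a more involved algorithm, but the 2-D case already shows why the indicator cannot prefer a strictly worse front.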