MICAI 2007: Advances in Artificial Intelligence
DOI: 10.1007/978-3-540-76631-5_12
G–Indicator: An M–Ary Quality Indicator for the Evaluation of Non–dominated Sets

Cited by 5 publications (7 citation statements) | References 4 publications
“…The g-Indicator (G), proposed by Lizárraga et al. (2008), is a metric that evaluates the diversity and distribution of solutions on the approximated front. In its generalized form, the metric is defined as the union of the hyper-volumes covered by hyper-spheres of radius U centered at each point of the Pareto front approximation.…”
Section: Comparison of Metrics
confidence: 99%
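The union-of-hyperspheres construction described in the statement above can be estimated numerically. The following is a minimal Monte Carlo sketch, not the authors' implementation: the function name, the sampling box, and the choice of radius U are all assumptions made for illustration.

```python
import numpy as np

def hypersphere_union_volume(front, radius, lower, upper, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the volume covered by the union of hyperspheres
    of a given radius centered at each point of a Pareto front approximation.

    front        : (m, d) array of m objective vectors in d dimensions
    radius       : hypersphere radius U
    lower, upper : per-dimension bounds of a box enclosing all hyperspheres
    """
    rng = np.random.default_rng(seed)
    front = np.asarray(front, dtype=float)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)

    # Draw uniform samples inside the bounding box.
    samples = rng.uniform(lower, upper, size=(n_samples, front.shape[1]))

    # A sample is covered if it lies within `radius` of any front point.
    dists = np.linalg.norm(samples[:, None, :] - front[None, :, :], axis=2)
    covered = (dists <= radius).any(axis=1)

    # Fraction of covered samples times the box volume approximates the union volume.
    box_volume = float(np.prod(upper - lower))
    return covered.mean() * box_volume

# Example usage on a tiny two-objective front (values are illustrative only).
front = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
vol = hypersphere_union_volume(front, radius=0.2, lower=[-0.5, -0.5], upper=[1.5, 1.5])
```

A larger estimated volume indicates that the points cover the objective space more evenly, which is the intuition behind using the union of hyperspheres as a diversity and distribution measure.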
“…A precision of 0.001 was set for each variable in the phenotype. The three algorithms were run 30 times with each mating-mutation configuration, and the average behavior of each configuration was assessed using a version of the G-metric [14] adapted to work in Ï, an n-ary quality indicator that ranks the PF*_known sets based on their attained dispersion and convergence.…”
Section: Testing RankMOEA
confidence: 99%
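As an n-ary indicator, a metric of this kind takes several approximation sets at once and orders them. The sketch below is only a rough illustration of ranking fronts by convergence and dispersion proxies; it is not the G-metric of [14], and the function names, the proxy definitions, and the unweighted scoring are all assumptions.

```python
import numpy as np

def convergence_proxy(front, reference):
    """Mean Euclidean distance from each front point to its nearest reference point
    (in the spirit of generational distance; lower is better)."""
    front = np.asarray(front, float)
    reference = np.asarray(reference, float)
    d = np.linalg.norm(front[:, None, :] - reference[None, :, :], axis=2)
    return d.min(axis=1).mean()

def dispersion_proxy(front):
    """Spread statistic: standard deviation of nearest-neighbour distances
    within the front (lower means a more even distribution)."""
    front = np.asarray(front, float)
    d = np.linalg.norm(front[:, None, :] - front[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # ignore self-distances
    return d.min(axis=1).std()

def rank_fronts(fronts, reference):
    """Rank approximation sets by the sum of both proxies (lower score = better rank).
    In practice the two terms would need normalization or weighting."""
    scores = [convergence_proxy(f, reference) + dispersion_proxy(f) for f in fronts]
    return np.argsort(scores)            # indices of fronts from best to worst
```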
“…Since most MCO algorithms have historically operated by iteratively optimizing the underlying MCO problem to exactly sample the Pareto surface, there has been little research into interpolated surface similarity metrics. Instead, most previous surface comparison research has focused on evaluating the relative performance between different Pareto surface interpolations, that is, which of the interpolated surfaces is superior [14-22]. This type of evaluation is not as useful for determining the numerical similarity between the interpolated surfaces, which is more relevant when creating machine learning MCO models aimed at exactly replicating the results of a different MCO algorithm.…”
Section: Introduction
confidence: 99%
“…Instead, most previous surface comparison research has focused on evaluating the relative performance between different Pareto surface interpolations, that is, which of the interpolated surfaces is superior [14-22]. This type of evaluation is not as useful for determining the numerical similarity between the interpolated surfaces, which is more relevant when creating machine learning MCO models aimed at exactly replicating the results of a different MCO algorithm. Particularly in radiation therapy MCO, the vertices on the surface are linearly interpolated to infer possible dose distributions and trade-offs, and most of the previously developed surface comparison techniques do not take this into account.…”
Section: Introduction
confidence: 99%