2015
DOI: 10.1109/tevc.2014.2303783

Learning Value Functions in Interactive Evolutionary Multiobjective Optimization

Abstract: This paper proposes an interactive multiobjective evolutionary algorithm (MOEA) that attempts to learn a value function capturing the user's true preferences. At regular intervals, the user is asked to rank a single pair of solutions. This information is used to update the algorithm's internal value function model, and the model is used in subsequent generations to rank solutions that are incomparable according to dominance. This speeds up evolution toward the region of the Pareto front most desirable to the user. We…
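The mechanism the abstract describes — periodically asking the user to rank one pair of solutions, fitting a value function to those answers, and using it to order dominance-incomparable solutions — can be sketched roughly as follows. This is a minimal illustration, not the authors' method: the linear value-function model and the random search over the weight simplex are assumptions made for the sketch.

```python
import random

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def fit_linear_value_function(pairs, n_obj, samples=2000, seed=0):
    """Search the weight simplex for weights consistent with the user's
    pairwise rankings: for each (preferred, other) pair we want
    dot(w, preferred) <= dot(w, other), all objectives being minimized."""
    rng = random.Random(seed)
    best_w, best_violations = None, float("inf")
    for _ in range(samples):
        raw = [rng.random() for _ in range(n_obj)]
        s = sum(raw)
        w = [r / s for r in raw]  # a random point on the weight simplex
        violations = sum(1 for a, b in pairs if dot(w, a) > dot(w, b))
        if violations < best_violations:
            best_w, best_violations = w, violations
    return best_w

def rank_incomparable(solutions, w):
    """Order dominance-incomparable solutions by the learned value
    (smaller weighted sum = preferred)."""
    return sorted(solutions, key=lambda x: dot(w, x))

# Two minimization objectives; the user's two answers reveal a
# preference for low f1 (each preferred point trades worse f2 for better f1).
pairs = [((1.0, 5.0), (4.0, 2.0)), ((2.0, 6.0), (5.0, 3.0))]
w = fit_linear_value_function(pairs, n_obj=2)
front = [(1.0, 5.0), (2.0, 4.0), (3.0, 3.0), (4.0, 2.0)]
ordered = rank_incomparable(front, w)  # mutually non-dominated points
```

In a real interactive MOEA the fitted value function would only break ties among non-dominated solutions; dominance comparisons are still decided by dominance.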

Cited by 95 publications (63 citation statements)
References 47 publications (62 reference statements)
“…In tandem with algorithmic advances, this has spurred renewed interest in Interactive Evolutionary Algorithms, which have been successfully applied to elicit user preferences and knowledge in many areas from design to art [51,52]. Recent results suggest a useful synergy, with periodic user interaction to incorporate preferences helping to focus search down to a more manageable set of dimensions [80]. Importantly, this involves eliciting user preferences in response to what is discovered to be possible, rather than a priori.…”
Section: Automated Design and Tuning of EAs
confidence: 99%
“…[5], are in principle the most accurate, because the algorithm is able to calculate the most representative value function with respect to the user preferences. DM preferences are specified at each interaction.…”
Section: Optimization with User Preferences
confidence: 99%
“…Two approaches have been explored: direct integration of the DM into the optimization process [3,4], by allowing them to insert a reference point, and real-time interaction with the optimization algorithm [5], where the DM expresses preferences after a defined number of iterations. The target point method follows the first approach.…”
Section: Introduction
confidence: 99%
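The first approach mentioned in the statement above — letting the DM supply a reference point — is commonly realized with an achievement scalarizing function, which this paper's interactive scheme contrasts with. A minimal sketch (the equal weights, the augmentation coefficient, and the sample front are illustrative assumptions, not values from the paper):

```python
def achievement(f, ref, w, rho=1e-4):
    """Wierzbicki-style achievement scalarizing function (minimization):
    smaller values mean the solution better satisfies the DM's
    reference (aspiration) point."""
    d = [wi * (fi - ri) for fi, ri, wi in zip(f, ref, w)]
    # max term drives the search toward the reference point; the small
    # rho * sum term breaks ties in favor of properly efficient points
    return max(d) + rho * sum(d)

ref = (2.0, 2.0)                    # the DM's aspiration levels
w = (1.0, 1.0)                      # objective weights (assumed equal)
front = [(1.0, 5.0), (2.0, 4.0), (3.0, 3.0), (4.0, 2.0)]
best = min(front, key=lambda f: achievement(f, ref, w))
```

With a balanced reference point, the scalarization selects the compromise solution on this front; moving `ref` steers the search to a different region without any further DM interaction.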
“…However, many current interactive methods still depend on a preference model, which is used to identify the region of interest (Chaudhuri and Deb, 2010; Sinha et al., 2014) or to refine the approximation of the Pareto front (Klamroth and Miettinen, 2008). Other studies build designer preferences interactively by querying the designer (Pedro and Takahashi, 2013) or through pairwise comparison of solutions by the designer (Branke et al., 2015; 2016); in these schemes the designer receives only fragmentary information rather than a big picture of the optimization potential in the current situation. The visualization of Pareto-optimal solutions is also often studied as a specialized topic, so that the designer can make decisions based on that visual information (Kollat and Reed, 2007; Blasco et al., 2008).…”
Section: Introduction
confidence: 99%