Proceedings of the Genetic and Evolutionary Computation Conference Companion 2021
DOI: 10.1145/3449726.3463166
Model learning with personalized interpretability estimation (ML-PIE)

Abstract: Figure 1: Schematic view of the proposed approach, ML-PIE. In the implementation proposed in this paper, the user provides feedback on models that are being discovered by an evolutionary algorithm. This feedback is used to train an estimator which, in turn, shapes one of the objective functions used by the evolution. Ultimately, this steers the evolution towards discovering models that are interpretable according to the specific user. To minimize the amount of feedback needed, ML-PIE keeps track of which model…
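The feedback loop described in the abstract (collect user ratings on a few candidate models, fit an interpretability estimator on them, then score every model with it) can be sketched as follows. This is a minimal illustration, not the paper's implementation; all names (`mlpie_loop`, `ask_user`, `train_estimator`) are hypothetical.

```python
import random

def mlpie_loop(models, ask_user, train_estimator, n_feedback=5):
    """Sketch of the ML-PIE idea: a handful of models is shown to the
    user, their feedback trains an estimator, and the estimator then
    provides an objective value for every model in the population.
    All names here are illustrative, not the paper's API."""
    # Sample a small subset of models to limit the feedback burden.
    sampled = random.sample(models, min(n_feedback, len(models)))
    feedback = {m: ask_user(m) for m in sampled}   # user ratings
    estimator = train_estimator(feedback)          # fit on the feedback
    # The learned estimator now shapes one objective for the evolution.
    return {m: estimator(m) for m in models}
```

In the paper this objective is optimized alongside model accuracy by a multi-objective evolutionary algorithm; the sketch only shows the estimation half of that loop.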

Cited by 16 publications (10 citation statements); references 36 publications.
“…𝛼-dom.), and a simple extension of NSGA-II as mentioned in [79], where non-dominated sorting assigns an artificial worst-possible rank to duplicate solutions. We refer to the latter as NSGA-II with penalization of duplicates, NSGA-II+PD in short.…”
Section: Methods
confidence: 99%
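The modification quoted above (assigning duplicate solutions an artificial worst-possible rank before survival selection) can be illustrated with a short sketch. This is a hedged, simplified rendering of the idea, not the cited paper's code; `rank_with_duplicate_penalty` and its inputs are hypothetical names.

```python
def rank_with_duplicate_penalty(population, fronts):
    """Assign each duplicate solution (beyond its first copy) a rank
    worse than any real non-dominated front, sketching the NSGA-II+PD
    idea described above. `population` is a list of hashable genotypes;
    `fronts` maps each genotype to its non-dominated-sorting rank
    (0 = best). Names and structure are illustrative only."""
    worst_rank = len(population)  # strictly worse than any real front
    seen = set()
    ranks = []
    for individual in population:
        if individual in seen:
            ranks.append(worst_rank)  # penalize the duplicate copy
        else:
            seen.add(individual)
            ranks.append(fronts[individual])
    return ranks
```

Because survival selection in NSGA-II keeps lower ranks first, duplicates penalized this way are the first to be discarded, which counteracts the loss of diversity that duplicate genotypes cause in genetic programming populations.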
“…For NSGA-II applied to discrete optimization, in [30] strategies are explored to remove duplicate solutions from the population. One such strategy is used for MOGP in [79], where NSGA-II is modified so that duplicate solutions are assigned the lowest priority to survive selection. Together with classic NSGA-II, SPEA2, and the 𝛼-dominance based algorithms, we also include this algorithm in our comparisons.…”
Section: Prior Work on Improving NSGA-II for GP
confidence: 99%
“…Moreover, researchers should strive to include user studies, to assess whether the proposed objective (interpretability, trust, etc.) obtains the desired effect in practice [32].…”
Section: Open Problems and Possible Directions in GP for IML
confidence: 97%
“…In our experience in clinical applications [28,29,30], obtaining a good objective function of what the user truly needs requires several sessions of interaction. A promising direction here is to use ML itself to learn what specific users find to be more or less interpretable in an automatic fashion [31,32]. Moreover, researchers should strive to include user studies, to assess whether the proposed objective (interpretability, trust, etc.)…”
Section: Open Problems and Possible Directions in GP for IML
confidence: 99%