2021
DOI: 10.48550/arxiv.2104.06060
Preprint

Model Learning with Personalized Interpretability Estimation (ML-PIE)

Abstract: Figure 1: Schematic view of the proposed approach, ML-PIE. In the implementation proposed in this paper, the user provides feedback on models that are being discovered by an evolutionary algorithm. This feedback is used to train an estimator which, in turn, shapes one of the objective functions used by the evolution. Ultimately, this steers the evolution towards discovering models that are interpretable according to the specific user. To minimize the amount of feedback needed, ML-PIE keeps track of which model…
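The abstract describes a feedback loop: an evolutionary search proposes models, the user rates some of them, and a learned estimator of interpretability turns those ratings into one of the search objectives. The toy sketch below illustrates that loop under stated assumptions; the model representation, the linear estimator, the simulated user, and every function name are illustrative choices of this note, not the paper's actual GP variant, estimator architecture, or feedback interface.

    # Minimal sketch of the ML-PIE-style feedback loop described in the abstract.
    # All names and the toy "model as feature vector" representation are assumptions
    # made for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)

    def random_model():
        # A "model" is reduced to a feature vector (e.g., size, depth, #nonlinear ops).
        return rng.integers(1, 20, size=3).astype(float)

    def prediction_error(model):
        # Placeholder for the accuracy objective of the evolutionary search.
        return float(np.abs(model).sum()) * rng.uniform(0.8, 1.2)

    class InterpretabilityEstimator:
        """Tiny linear estimator trained online from user feedback."""
        def __init__(self, n_features, lr=0.01):
            self.w = np.zeros(n_features)
            self.lr = lr
        def predict(self, feats):
            return float(self.w @ feats)
        def update(self, feats, user_score):
            # One SGD step toward the user's interpretability rating.
            self.w += self.lr * (user_score - self.predict(feats)) * feats

    def simulated_user_score(model):
        # Stand-in for a real user: smaller models are rated as more interpretable.
        return -model.sum()

    estimator = InterpretabilityEstimator(n_features=3)
    population = [random_model() for _ in range(20)]

    for generation in range(10):
        # Rank by the accuracy objective and by the estimator-shaped objective.
        scored = [(prediction_error(m), -estimator.predict(m), m) for m in population]
        scored.sort(key=lambda t: (t[0], t[1]))
        survivors = [m for _, _, m in scored[:10]]

        # Periodically ask the "user" to rate one model and update the estimator.
        queried = survivors[rng.integers(len(survivors))]
        estimator.update(queried, simulated_user_score(queried))

        # Mutate survivors to form the next population.
        population = survivors + [m + rng.normal(0, 1, size=3) for m in survivors]

    best = min(population, key=prediction_error)
    print("best model:", best, "estimated interpretability:", estimator.predict(best))

Note that the abstract's point about minimizing the amount of feedback would, in a fuller implementation, replace the random choice of which model to query with an uncertainty-based selection; that part is only hinted at here.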

Cited by 3 publications (4 citation statements)
References 45 publications (57 reference statements)
“…This also relates to one of the motivating factors behind interactive EC - we want something that is mathematically optimised, but also something that corresponds to the problem owner's hard-to-codify intuition. By incorporating XAI into interactive EC we could make it easier for the problem owner to interact with the optimiser, see [65].…”
Section: Motivation (mentioning)
confidence: 99%
“…The balance between accuracy and interpretability has been explored in the context of genetic fuzzy systems [25]. In this regard, some recent studies have proposed machine-learned quantifiable measures of interpretability [65], while others [66] have emphasised the importance of focusing on low-complexity models, especially in the context of GP. Another important aspect in ML, namely fairness, has instead been addressed in [34], where explicit fairness constraints have been introduced in GP to obtain fair classifiers.…”
Section: EC for XAI (mentioning)
confidence: 99%
“…In general, the trade-off between accuracy and simplicity must be considered when evaluating the merits of different models. Furthermore, model simplicity, typically measured as sparsity or model size, is but a proxy for model interpretability; a simple model may still be un-interpretable, or simply wrong [30][31][32]. With these concerns in mind, datasets with ground truth solutions are useful, in that they allow researchers to assess whether or not the symbolic model regressed by a given method corresponds to a known analytical solution.…”
Section: Background and Motivation (mentioning)
confidence: 99%
“…Different stakeholders have different needs for explanation [12,75], but these needs are not often well-articulated or distinguished from each other [38,41,54,65,84]. Clarity on the intended use of explanation is crucial to select an appropriate XAI tool, as specialized methods exist for specific needs like debugging [39], formal verification (safety) [18,28,85], uncertainty quantification [1,79], actionable recourse [40,76], mechanism inference [20], causal inference [11,26,62], robustness to adversarial inputs [48,52], data accountability [87], social transparency [23], interactive personalization [78], and fairness and algorithmic bias [60]. In contrast, feature importance methods like LIME [66] and SHAP [49,50] focus exclusively on computing quantitative evidence for indicative conditionals [10,30] (of the form "If the applicant doesn't have enough income, then she won't get the loan approved"), with some newer counterfactual explanation methods [8,56,72] and negative contrastive methods [51] finding similar evidence for subjunctive conditionals [14,64] (of the form "If the applicant increases her income, then she would get the loan approved").…”
Section: The Challenges (mentioning)
confidence: 99%