2022
DOI: 10.21105/jose.00175
What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components

Abstract: Explainability techniques for data-driven predictive models based on artificial intelligence and machine learning algorithms allow us to better understand the operation of such systems and help to hold them accountable (Sokol & Flach, 2021a). New transparency approaches are developed at breakneck speed, enabling us to peek inside these black boxes and interpret their decisions. Many of these techniques are introduced as monolithic tools, giving the impression of one-size-fits-all and end-to-end algorithms with…

Cited by 5 publications (6 citation statements)
References 14 publications
“…Misconstrued Explainer Structure. While the separation between explanation content, presentation format, provision mechanism and the act of explaining is gaining traction in XAI, explainability systems remain predominantly perceived as monolithic entities despite in fact being highly modular [72]. This misconception results in widespread use of off-the-shelf explainers without the thought or effort of customising these tools and building bespoke XAI systems on top of them, even though doing so promises more trustworthy and higher-quality explanations [76].…”
Section: Evaluation Deficiencies
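The modularity this quote describes can be made concrete. Below is a minimal, hypothetical sketch (not code from the cited paper; all function and parameter names are illustrative assumptions) of a LIME-like explainer assembled from interchangeable components: a sampler, a similarity kernel and a surrogate model, any of which can be swapped to build a bespoke explainer.

```python
# Hypothetical sketch of a modular, LIME-like explainer built from
# interchangeable components. Names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

def gaussian_sampler(instance, n_samples=500, scale=0.5, rng=None):
    """Draw perturbed neighbours around the explained instance."""
    rng = rng or np.random.default_rng(0)
    return instance + rng.normal(0.0, scale, size=(n_samples, instance.size))

def rbf_kernel(samples, instance, width=1.0):
    """Weight neighbours by their proximity to the explained instance."""
    d2 = ((samples - instance) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def explain(black_box, instance, sampler=gaussian_sampler,
            kernel=rbf_kernel, surrogate=Ridge):
    """Fit a distance-weighted linear surrogate around the instance;
    its coefficients act as local feature importances."""
    samples = sampler(instance)
    weights = kernel(samples, instance)
    model = surrogate(alpha=1.0)
    model.fit(samples, black_box(samples), sample_weight=weights)
    return model.coef_

# Usage: explain a toy black box around one point. Swapping the sampler,
# kernel or surrogate customises the explainer without other changes.
black_box = lambda X: 3.0 * X[:, 0] - 2.0 * X[:, 1]
importances = explain(black_box, np.array([1.0, 1.0]))
```

Because the black box here is itself linear, the recovered importances should be close to its true coefficients; for a real model the surrogate only approximates it locally.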
“…This inspires an alternative, diagnostic conceptualisation of XAI, which focuses on providing users with rigorously tested and well-specified insights into a predictive model instead of attempting to solve the ill-defined "black box" problem [14]. In terms of evaluation, we should thus not only test explainers end-to-end but also validate their individual components independently and provide clear guidelines on how and when to operationalise these tools to guarantee correctness and trustworthiness of their outputs [56,72].…”
Section: Evaluation Deficiencies
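The component-level validation this quote advocates can be sketched in isolation: here a similarity kernel, one building block of a LIME-like explainer, is checked against basic properties it must satisfy before being composed into a full pipeline. This is a hypothetical illustration; the function name and checks are assumptions, not tests from the cited works.

```python
# Hypothetical sketch: validating one explainer component (a similarity
# kernel) independently, before composing it into a full explainer.
import numpy as np

def rbf_kernel(samples, instance, width=1.0):
    """Gaussian similarity weights based on squared distance."""
    d2 = ((samples - instance) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

instance = np.zeros(3)
samples = np.vstack([instance, instance + 1.0, instance + 5.0])
w = rbf_kernel(samples, instance)

# Component-level checks: weight 1 at the instance itself, weights
# decreasing with distance, and all weights bounded in (0, 1].
assert w[0] == 1.0
assert w[0] > w[1] > w[2]
assert np.all((w > 0) & (w <= 1))
```

Testing each component against such explicit properties, rather than only testing the explainer end-to-end, localises faults and documents the conditions under which the component behaves correctly.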
“…Further properties for creating one's own LIME-style explanation algorithm are discussed in [59]. Notably, [59] discusses one of LIME's key restrictions: the relationship between the interpretable space and the original space must be known in advance. [59] indicates that, whenever possible, bijective functions should be used to limit errors when projecting from the interpretable space back to the original one.…”
Section: Difficulties To Set Up a LIME-like Approach
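The role of bijectivity noted in this quote can be illustrated with a small hypothetical sketch: a standardisation transform between the original feature space and an interpretable, z-score space is invertible, so projecting back and forth loses no information. The class and variable names are assumptions made for illustration, not an interface from [59].

```python
# Hypothetical sketch: a bijective interpretable-space transform.
# Z-scores are human-readable ("1.5 std devs above the mean") and,
# unlike e.g. quartile binning, can be inverted exactly.
import numpy as np

class ZScoreTransform:
    def __init__(self, mean, std):
        self.mean, self.std = np.asarray(mean), np.asarray(std)

    def to_interpretable(self, x):
        """Original space -> interpretable (standardised) space."""
        return (x - self.mean) / self.std

    def to_original(self, z):
        """Interpretable space -> original space (exact inverse)."""
        return z * self.std + self.mean

t = ZScoreTransform(mean=[10.0, 0.0], std=[2.0, 1.0])
x = np.array([13.0, -1.5])
round_trip = t.to_original(t.to_interpretable(x))
# A non-bijective map (e.g. binarising features into quartile bins)
# could not recover x exactly; that loss is the projection error
# the quote warns about.
```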
“…LIME is an algorithm that can explain the predictions of any classifier or regressor by approximating it locally with an interpretable model. It can enhance the reliability of prediction modelling based on neural network analysis [6,7].…”
Section: Introduction