2019
DOI: 10.48550/arxiv.1909.06342
Preprint

Explainable Machine Learning in Deployment

Abstract: Explainable machine learning seeks to provide various stakeholders with insights into model behavior via feature importance scores, counterfactual explanations, and influential samples, among other techniques. Recent advances in this line of work, however, have gone without surveys of how organizations are using these techniques in practice. This study explores how organizations view and use explainability for stakeholder consumption. We find that the majority of deployments are not for end users affected by t…
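
The abstract names feature importance scores among the explainability techniques organizations deploy. As a minimal sketch of one such technique (assuming scikit-learn is available; the dataset, model, and parameters are illustrative, not the paper's setup), permutation importance measures how much held-out accuracy drops when each feature is shuffled:

```python
# A minimal sketch, assuming scikit-learn is installed; the dataset, model,
# and parameters are illustrative assumptions, not the paper's setup.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in held-out accuracy;
# larger drops mark features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
top5 = sorted(zip(X.columns, result.importances_mean),
              key=lambda pair: -pair[1])[:5]
for name, score in top5:
    print(f"{name}: {score:.3f}")
```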

Cited by 8 publications (11 citation statements). References 30 publications (41 reference statements).

Citation statements

“…The question of practically using machine learning explainability has been poorly covered in the existing literature. One notable exception is Bhatt et al. [3], who study the use of explanation methods in practice and show that they are currently used mainly by machine learning engineers, in an ad hoc way, as sanity checks for the models they build and deploy. One reason identified by the authors is that organizations lack frameworks for making decisions regarding explainability, leaving these methods understandable only to people with a background in machine learning and obscure to others.…”
Section: Implementing Explainability: Current State of the Literature
confidence: 99%

“…Typical categorizations of stakeholders are based on their role in an organization [3,9,12,16], their machine learning experience [18], or a combination of the two [15]. Different propositions have also been made in the literature for categorizing stakeholder needs regarding explainability.…”
Section: Understanding Stakeholder Needs
confidence: 99%

“…We add to a growing literature in computer science that studies algorithmic audits and derives specific explainability techniques from axioms about their deployment-agnostic properties (e.g. Bhatt et al., 2020; Carvalho et al., 2019; Chen et al., 2018; Doshi-Velez and Kim, 2017; Guidotti et al., 2018; Hashemi and Fathi, 2020; Lundberg and Lee, 2017; Murdoch et al., 2019; Ribeiro et al., 2016a). In particular, Lakkaraju and Bastani (2020), Slack et al. (2020), and Lakkaraju et al. (2019) study the limitations of post-hoc explanation tools in providing useful and accurate descriptions of the underlying models, and show that simple explanations can be inadequate in distinguishing relevant model behavior.…”
Section: Introduction
confidence: 99%
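
The statement above concerns the limits of post-hoc explanation tools such as LIME (Ribeiro et al., 2016a) and SHAP (Lundberg and Lee, 2017). As a rough illustration of what such a tool does, here is a LIME-style local surrogate: a weighted linear model fit to a black box around a single instance. The function name, kernel, parameters, and toy black box are illustrative assumptions, not the cited papers' implementations.

```python
# A LIME-style local surrogate, sketched under stated assumptions: the names,
# the proximity kernel, and the toy black box below are hypothetical, not the
# cited papers' implementations.
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(predict_fn, x, n_samples=500, scale=0.1, seed=0):
    """Fit a weighted linear model to predict_fn near x; return its weights."""
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise and query the black box.
    Z = x + scale * rng.standard_normal((n_samples, x.shape[0]))
    preds = predict_fn(Z)
    # Weight perturbed points by proximity to x (RBF kernel).
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))
    surrogate = Ridge(alpha=1e-3).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_  # local, per-feature attributions

# Toy black box f(z) = z1^2 + z2^2; its local gradient at x = (1, 2) is
# (2, 4), which the surrogate's coefficients should approximate.
coefs = local_surrogate(lambda Z: (Z ** 2).sum(axis=1), np.array([1.0, 2.0]))
print(coefs)
```

Such a surrogate is faithful only in the neighborhood defined by the kernel, which is precisely the kind of limitation the cited critiques examine.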