2022 ACM Conference on Fairness, Accountability, and Transparency
DOI: 10.1145/3531146.3533153
Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts

Cited by 25 publications (8 citation statements)
References 18 publications
“…First, interpreting SHAP values warrants caution 57 , as is generally the case for methods using model explanations in medicine 58,59 . Unstable explanations are not uncommon for complex models trained on large datasets 60,61 . While the ranking of importance may fluctuate, features with higher mean absolute SHAP values generally maintain consistent attributions.…”
Section: Limitations and Recommendation (mentioning; confidence: 99%)
“…The duty of care of an AI provider focuses on demonstrating to a mandated supervisory body that the system was compliant only with its instructions for use and documentation. This likely invalidates the possibility for end-users of interpreting system outputs or receiving explanations as a burden of proof under litigation [19].…”
Section: AI Liability Directive (mentioning; confidence: 99%)
“…The advantage of post hoc methods is that they can be used with any model architecture and, therefore, do not require a trade-off of predictive model performance. However, such approaches explain model decisions on specified data points and do not provide a holistic view of the entire decision function (Bordt et al, 2022). An alternative approach is to use intrinsically explainable machine learning models (Rudin, 2019).…”
Section: Introduction (mentioning; confidence: 99%)