2022
DOI: 10.48550/arxiv.2202.05302
Preprint

Trust in AI: Interpretability is not necessary or sufficient, while black-box interaction is necessary and sufficient

Abstract: The problem of human trust in artificial intelligence is one of the most fundamental problems in applied machine learning. Our processes for evaluating AI trustworthiness have substantial ramifications for ML's impact on science, health, and humanity, yet confusion surrounds foundational concepts. What does it mean to trust an AI, and how do humans assess AI trustworthiness? What are the mechanisms for building trustworthy AI? And what is the role of interpretable ML in trust? Here, we draw from statistical lea…

Cited by 3 publications (6 citation statements)
References 47 publications
“…Pseudocode of model-agnostic meta-learning with evolution strategies for AI-system resilience optimization (39). The level of perturbation is limited by the L∞-norm or the L0-norm.…”
Section: Methods Of Ensuring The Resilience Of AI Systems
confidence: 99%
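The citation above pairs evolution-strategies optimization with a norm bound on input perturbations. Below is a minimal sketch of that combination, not the cited paper's pseudocode: a simple antithetic evolution-strategies search for a loss-maximizing perturbation whose magnitude is kept within an L∞ ball. The function names, the hyperparameters, and the toy loss in the usage example are illustrative assumptions.

```python
import numpy as np

def linf_project(delta, eps):
    """Keep every component of the perturbation within [-eps, eps] (the L-infinity bound)."""
    return np.clip(delta, -eps, eps)

def es_perturbation_search(loss_fn, x, eps=0.05, pop=50, sigma=0.01, lr=0.1, steps=100):
    """Estimate a loss-maximizing perturbation of x with a simple antithetic evolution strategy."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        noise = np.random.randn(pop, *x.shape)
        # Evaluate the loss at symmetric (antithetic) candidate perturbations,
        # each projected back onto the L-infinity ball before evaluation.
        rewards = np.array([
            loss_fn(x + linf_project(delta + sigma * n, eps)) -
            loss_fn(x + linf_project(delta - sigma * n, eps))
            for n in noise
        ])
        # ES gradient estimate, then ascend to make the perturbation more damaging.
        grad = (rewards[:, None] * noise.reshape(pop, -1)).mean(axis=0).reshape(x.shape) / (2 * sigma)
        delta = linf_project(delta + lr * grad, eps)
    return delta

# Example with a toy quadratic "loss" around a nominal input:
x0 = np.zeros(4)
delta = es_perturbation_search(lambda x: float((x ** 2).sum()), x0, eps=0.05)
print(np.abs(delta).max() <= 0.05)  # True: the perturbation respects the L-infinity bound
```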
“…In addition to Uncertainty Monitoring, the Explainable AI mechanism can be used to assist decision-making by the human to whom control is delegated in case of uncertainty. The article (39) questions the necessity and adequacy of existing methods of explaining decisions, so the explanation mechanism will be excluded from further consideration, but for generality, the diagram shows this MLOps stage.…”
Section: Architecting Resilient MLOps-Based Medical Diagnostic System
confidence: 99%
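The delegation pattern described above (monitor uncertainty, hand control to a human when the model is unsure) can be illustrated with a short sketch. This is an assumption about one reasonable realization, not the cited article's implementation; the entropy threshold and the return format are made up for the example.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of a class-probability vector; higher means more uncertain."""
    probs = np.clip(probs, 1e-12, 1.0)
    return float(-(probs * np.log(probs)).sum())

def decide(probs, entropy_threshold=0.5):
    """Automate confident decisions; delegate uncertain ones to a human reviewer."""
    if predictive_entropy(probs) > entropy_threshold:
        return {"action": "delegate_to_human", "probs": probs.tolist()}
    return {"action": "automated", "label": int(np.argmax(probs))}

# A confident prediction is automated, an ambiguous one is delegated.
print(decide(np.array([0.97, 0.02, 0.01])))
print(decide(np.array([0.40, 0.35, 0.25])))
```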
“…Our work solves this issue by introducing the first interpretable concept-based model that learns logic rules from concept embeddings. Our approach draws from t-norm fuzzy logic learning paradigms (Diligenti et al., 2021; Badreddine et al., 2022; van Krieken et al., 2022)…”
Section: Key Findings And Significance
confidence: 99%
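The t-norm fuzzy logic paradigms mentioned in this citation relax logical connectives to differentiable operations on truth values in [0, 1], so that rule satisfaction can be optimized by gradient descent. The sketch below shows the standard product t-norm semantics; it is illustrative and not taken from the cited papers' code.

```python
import torch

def t_and(a, b):      # product t-norm: conjunction
    return a * b

def t_or(a, b):       # dual t-conorm: disjunction
    return a + b - a * b

def t_not(a):         # standard negation
    return 1.0 - a

def t_implies(a, b):  # implication as not(a) OR b under the product semantics
    return t_or(t_not(a), b)

# Truth value of the rule "red AND round -> apple" under soft concept activations.
red, round_, apple = torch.tensor(0.9), torch.tensor(0.8), torch.tensor(0.7)
rule_satisfaction = t_implies(t_and(red, round_), apple)
print(rule_satisfaction)  # differentiable, so rule satisfaction can be maximized during training
```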
“…Concept-based models (Kim et al., 2018; Chen et al., 2020) aim to increase human trust in deep learning models by using human-understandable concepts to train interpretable models, such as logistic regression or decision trees (Rudin, 2019; Koh et al., 2020; Kazhdan et al., 2020) (Figure 1). This approach significantly increases human trust in the AI predictor (Rudin, 2019; Shen, 2022) as it allows users to clearly understand a model's decision process. However, state-of-the-art concept-based models, which rely on concept embeddings (Yeh et al., 2020; Kazhdan et al., 2020; Mahinpei et al., 2021; Espinosa Zarlenga et al., 2022) to attain high performance, are not completely interpretable.…”
Section: Introduction
confidence: 99%
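The concept-bottleneck idea summarized in this citation, a black-box encoder that predicts human-understandable concepts followed by a simple, inspectable predictor over those concepts, can be sketched in a few lines. The layer sizes and class name below are illustrative assumptions rather than any cited paper's architecture.

```python
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    def __init__(self, n_features, n_concepts, n_classes):
        super().__init__()
        # Black-box part: raw features -> concept activations in [0, 1].
        self.concept_encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_concepts), nn.Sigmoid(),
        )
        # Interpretable part: a single linear (logistic-regression-style) head,
        # so the weight each concept contributes to each class can be read off directly.
        self.task_head = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        concepts = self.concept_encoder(x)
        return self.task_head(concepts), concepts

model = ConceptBottleneckModel(n_features=32, n_concepts=8, n_classes=3)
logits, concepts = model(torch.randn(4, 32))
print(logits.shape, concepts.shape)  # torch.Size([4, 3]) torch.Size([4, 8])
```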