2020
DOI: 10.1109/mc.2020.2993623

Cybertrust: From Explainable to Actionable and Interpretable Artificial Intelligence

Abstract: To benefit from AI advances, users and operators of AI systems must have reason to trust it. Trust arises from multiple interactions, where predictable and desirable behavior is reinforced over time. Providing the system's users with some understanding of AI operations can support predictability, but forcing AI to explain itself risks constraining AI capabilities to only those reconcilable with human cognition. We argue that AI systems should be designed with features that build trust by bringing decision-analytic…

Cited by 32 publications (15 citation statements)
References 9 publications
“…However, Figures 8, 9 suggest very little interdisciplinary engagement with these topics by AI researchers from both the social science and technical domains (the exception is the combination of risk assessment and networks that appears in the aforementioned graphs). This "governance gap" in AI research exists despite growing concern by social science researchers about AI governance issues (Wachter et al., 2017; Winfield et al., 2018; Linkov et al., 2020).…”
Section: Governance Gap Between the Social Science and Technical Domains
Confidence: 99%
“…Advances in artificial intelligence (AI) have expanded its adoption in the computer security of defense and financial systems, economics, education, and many other fields (Wachter et al., 2017; Winfield et al., 2018; Linkov et al., 2020). Emerging technologies like AI will eventually contend with regulatory pressure and public attention that may either hinder or stimulate their development.…”
Section: Introduction
Confidence: 99%
“…While the rule-based approaches of early AI were comprehensible “glass box” approaches, at least in narrow domains, their weakness lay in dealing with the uncertainties of the real world [7]. Actionable Explainable AI (AXAI) is intended to help promote trust-building features by bringing decision analytic perspectives and human domain knowledge directly into the AI pipeline [8], [9].…”
Section: Introduction
Confidence: 99%
“…• Be transparent: the user needs to understand how the AI works and what it is capable of (explainable AI) [153]. Explainability concerns the ability to explain both the technical processes of an AI system and the related decisions.…”
Section: Knowledge Layer: Knowledge-based Services
Confidence: 99%