2020
DOI: 10.3758/s13423-020-01825-5
Artificial cognition: How experimental psychology can help generate explainable artificial intelligence

Abstract: Artificial intelligence powered by deep neural networks has reached a level of complexity where it can be difficult or impossible to express how a model makes its decisions. This black-box problem is especially concerning when the model makes decisions with consequences for human well-being. In response, an emerging field called explainable artificial intelligence (XAI) aims to increase the interpretability, fairness, and transparency of machine learning. In this paper, we describe how cognitive psychologists …

Cited by 52 publications (46 citation statements) | References 83 publications
“…With the above concerns, XAI can help bridge the gap between DL and neuroscience in a mutually beneficial way. On one side, neuroscience and psychology can help build rationalized XAI models that are more easily understood by humankind (Byrne, 2019; Taylor & Taylor, 2020). On the other side, XAI models derived from deep neural networks can also help in understanding the mechanisms of intelligence in the human brain (Fellous et al., 2020; Vu et al., 2018).…”
Section: Bridge the Gap Between DL and Neuroscience via XAI
mentioning
confidence: 99%
“…Though very powerful, many AI methods are black boxes in nature, meaning that the inner mechanisms that produce outputs in these methods are unknown [28, 29]. Obviously, this opacity is an obstacle to AI penetration across many sensitive or high-stakes areas such as banking, defense, finance, and medicine, and even in common industry [30, 31].…”
Section: Introduction
mentioning
confidence: 99%
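The black-box concern quoted above is what post-hoc, model-agnostic XAI techniques try to address. As a minimal, hypothetical sketch (the synthetic dataset and choice of model below are illustrative assumptions, not drawn from the cited papers), permutation feature importance probes a trained model by shuffling one input feature at a time and measuring the drop in held-out accuracy:

```python
# Minimal sketch of a model-agnostic XAI probe: permutation feature
# importance. Data and model are hypothetical stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic tabular data stands in for a high-stakes decision task.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": accurate, but its internals are hard to express.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc explanation: shuffle each feature and measure the drop in
# held-out accuracy; a large drop marks a feature the model relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f}")
```

Because the probe only needs the model's predictions, it applies to any black box, which is one reason such methods recur across the XAI reviews cited here.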
“…More recently, as the discipline has grown, more specialized works have emerged. Reviews on XAI have addressed drug discovery [31], fintech management [35], healthcare [30, 33, 36], neurorobotics [39], pathology [28], plant biology [37], and psychology [29]. Thus, it is necessary to produce an analytical compilation of PHM-XAI works, which is still absent.…”
Section: Introduction
mentioning
confidence: 99%
“…Previous studies have demonstrated several standard approaches to assessing human emotional states and cognitive processes. Research [24] discussed the prospect of using different approaches to evaluate cognitive processes in AI, including machine learning. They described the possibility of using machine learning to increase the efficiency of explainable AI in decision-making for the well-being of people.…”
Section: Introduction
mentioning
confidence: 99%
“…Researchers and designers have long recognized the importance of modeling stress and trust as significant influences on the acceptance and adoption of new technologies. On the basis of the aforementioned studies [6], [7], [8], [16], [24], [25], [26], the standard approaches to evaluating cognitive processes and human emotional states can be divided into the following five main groups: 1) surveys to measure qualitative characteristics of an AI system; 2) regression modeling; 3) exploratory and confirmatory factor analysis, including TAM; 4) predictive modeling; and 5) advanced machine learning modeling (such as random forests and support vector machines). In many studies, including the present research, user stress based on trust in the AI system depends on the reliability of the system and the success of the task performed.…”
Section: Introduction
mentioning
confidence: 99%
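For concreteness, here is a minimal sketch of group 5 from the taxonomy above: a random forest predicting a binary user-stress label from trust, reliability, and task-success features. Everything in it (the synthetic data, the feature set, and the labeling rule) is a hypothetical illustration, not the pipeline of any cited study:

```python
# Hedged sketch of "advanced machine learning modeling" (group 5):
# a random forest classifying user stress from hypothetical
# trust- and reliability-related features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
# Hypothetical predictors per participant session: self-reported
# trust, system reliability, and task success rate.
trust = rng.uniform(0.0, 1.0, n)
reliability = rng.uniform(0.5, 1.0, n)
task_success = rng.uniform(0.0, 1.0, n)
X = np.column_stack([trust, reliability, task_success])
# Illustrative labeling rule: stress is high when trust and
# reliability are both low, plus noise.
stress = ((1 - trust) + (1 - reliability) + rng.normal(0, 0.2, n)) > 1.0

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, stress, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

Cross-validated accuracy stands in for the evaluation such studies perform; swapping in an SVM (sklearn.svm.SVC) would cover the other model family named in group 5.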