2022
DOI: 10.48550/arxiv.2205.15581
Preprint

Comparing interpretation methods in mental state decoding analyses with deep learning models

Abstract: Deep learning (DL) methods find increasing application in mental state decoding, where researchers seek to understand the mapping between mental states (such as accepting or rejecting a gamble) and brain activity by identifying those brain regions (and networks) whose activity allows these states to be accurately identified (i.e., decoded). Once DL models have been trained to accurately decode a set of mental states, neuroimaging researchers often make use of interpretation methods from explainable artificial intelligence…
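To make the decode-then-interpret workflow concrete, the following is a minimal sketch of applying one attribution method (integrated gradients, via the captum library) to a toy decoder. The model architecture, input size, and data are placeholder assumptions for illustration only; the paper itself compares several attribution methods applied to DL models trained on real fMRI data.

```python
# Minimal sketch (not the paper's code): applying one XAI attribution
# method, integrated gradients, to a toy mental state decoder.
# The decoder, the flattened input size, and the target state index
# below are hypothetical placeholders.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

n_voxels, n_states = 4096, 2  # assumed flattened volume size / number of mental states

# Stand-in decoder: a small MLP mapping brain activity to mental states.
decoder = nn.Sequential(
    nn.Linear(n_voxels, 64),
    nn.ReLU(),
    nn.Linear(64, n_states),
)
decoder.eval()

# One fake "brain volume" (random values in place of real fMRI data).
x = torch.randn(1, n_voxels, requires_grad=True)

# Attribute the decoder's decision for state 0 back to the input voxels;
# the result has the same shape as the input, one relevance value per voxel.
ig = IntegratedGradients(decoder)
attributions = ig.attribute(x, target=0)
print(attributions.shape)  # torch.Size([1, 4096])
```

In practice the per-voxel attributions would be reshaped back into brain space and inspected against known functional anatomy, which is the kind of analysis whose reliability the paper evaluates.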

Citations: cited by 1 publication (1 citation statement)
References: 57 publications (93 reference statements)
“…While the proposed frameworks perform well in training models that generalize to new neuroimaging datasets, this work currently neglects another key aspect of mental state decoding, namely, the ability to draw inferences about the association between decoded mental states and brain activity from the trained models. First empirical evidence indicates that attribution methods from explainable artificial intelligence [XAI; 33] research are well-suited to provide insights in the mental state decoding decisions of DL models [34,35]. Yet, further research is needed to better understand the limits of these types of attribution methods in mental state decoding.…”
Section: Discussion (mentioning)
confidence: 99%