2023
DOI: 10.1038/s42256-023-00711-8

From attribution maps to human-understandable explanations through Concept Relevance Propagation

Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, et al.

Abstract: The field of explainable artificial intelligence (XAI) aims to bring transparency to today’s powerful but opaque deep learning models. While local XAI methods explain individual predictions in the form of attribution maps, thereby identifying ‘where’ important features occur (but not providing information about ‘what’ they represent), global explanation techniques visualize what concepts a model has generally learned to encode. Both types of method thus provide only partial insights and leave the burden of int…
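As a concrete illustration of the ‘where’ versus ‘what’ distinction drawn in the abstract, the sketch below contrasts a plain gradient×input attribution map (local, pixel-level ‘where’) with a crude concept-conditional variant that restricts the backward pass to a single latent channel (a stand-in for a learned ‘concept’). This is only a simplified sketch under assumed placeholders (an untrained ResNet-18, layer4, channel 42); the paper’s actual method, Concept Relevance Propagation, builds on layer-wise relevance propagation rather than raw gradients.

```python
# Minimal 'where' vs 'what' sketch (NOT the paper's CRP algorithm).
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()          # placeholder, untrained network
x = torch.randn(1, 3, 224, 224, requires_grad=True)   # placeholder input image

# Local 'where' explanation: gradient x input, aggregated over colour channels.
logits = model(x)
target = logits[0].argmax()
grad, = torch.autograd.grad(logits[0, target], x)
heatmap = (grad * x).sum(dim=1).abs()                 # [1, 224, 224] attribution map

# Crude 'what' variant: keep only the relevance flowing through one latent
# channel (channel 42 of layer4 -- a hypothetical 'concept'), mapped back to pixels.
feats = {}
hook = model.layer4.register_forward_hook(lambda m, i, o: feats.update(out=o))
x2 = x.detach().clone().requires_grad_(True)
logits2 = model(x2)
hook.remove()

grad_feats, = torch.autograd.grad(logits2[0, target], feats["out"], retain_graph=True)
mask = torch.zeros_like(feats["out"])
mask[:, 42] = 1.0                                     # select the hypothetical concept channel
channel_relevance = (feats["out"] * grad_feats * mask).sum()
concept_grad, = torch.autograd.grad(channel_relevance, x2)
concept_heatmap = (concept_grad * x2).sum(dim=1).abs()
```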

Cited by 36 publications (18 citation statements)
References 45 publications

“…Attribution-based methods compute saliency maps, indicating how much each pixel contributed to the prediction (Ribeiro et al 2016, Zhou et al 2019, Selvaraju et al 2017, Chattopadhyay et al 2017, Ramaswamy et al 2020, Ali et al 2021). This is achieved by computing the attention of inner layers of the model by aggregating their activations, or gradients, for each pixel (Bach et al 2015, Achtibat et al 2023, Gur et al 2021). Accordingly, saliency maps visualize localized regions particularly important for the classification.…”
Section: Discussion
confidence: 99%
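The quoted passage describes saliency maps obtained by aggregating inner-layer activations and gradients into a per-pixel map. Below is one common instantiation of that recipe, a Grad-CAM-style map; it is a hedged illustration rather than the exact procedure of any of the cited works, and the model, layer, and random input are placeholders.

```python
# Grad-CAM-style aggregation of inner-layer activations weighted by gradients.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None).eval()        # placeholder, untrained network
store = {}
hook = model.layer4.register_forward_hook(lambda m, i, o: store.update(act=o))

x = torch.randn(1, 3, 224, 224)                     # placeholder input image
logits = model(x)
hook.remove()

target = logits[0].argmax()
grads, = torch.autograd.grad(logits[0, target], store["act"])
weights = grads.mean(dim=(2, 3), keepdim=True)      # channel weights: spatial mean of gradients
cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
saliency = cam / (cam.max() + 1e-8)                 # normalised per-pixel saliency map
```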
“…However, it is equally important to go beyond mere identification and strive for explanations at the conceptual level. Conceptual-level explanations (consistent with theoretical understanding) aim to reveal the underlying concepts or higher-level abstractions that contribute to the observed feature importance. This type of explanation can provide deeper insights and a more comprehensive understanding of the model’s behavior.…”
Section: Challenges and Future Directions
confidence: 94%
“…We employed Concept Relevance Propagation (CRP) (38) as a means to assess whether the ResNet-50 utilized similar concepts, represented by convolution filters in CNNs, to classify images belonging to the same class, regardless of the dataset it was trained on. This analysis was crucial to ensure that the classifier employed consistent strategies in identifying the tissue of the tile, regardless of its origin (real or generated).…”
Section: Methods
confidence: 99%
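To make the filter-level question in this passage concrete, the sketch below scores each convolutional filter of a chosen layer by a simple gradient×activation proxy and compares the resulting per-filter profiles of two classifiers on the same input and class. This is not the CRP conditional relevance computation of Achtibat et al.; the models (untrained ResNet-50s standing in for networks trained on real and generated tiles), layer, class index, scoring rule, and cosine comparison are assumptions for illustration.

```python
# Per-filter relevance proxy and cross-model comparison (NOT CRP itself).
import torch
import torch.nn.functional as F
import torchvision.models as models

def filter_relevance(model, layer, x, class_idx):
    """Score every filter of `layer` by gradient*activation for `class_idx`."""
    store = {}
    hook = layer.register_forward_hook(lambda m, i, o: store.update(act=o))
    logits = model(x)
    hook.remove()
    grads, = torch.autograd.grad(logits[0, class_idx], store["act"])
    return (grads * store["act"]).sum(dim=(0, 2, 3))   # one score per filter

# Stand-ins for the two classifiers (trained on real vs. generated tiles).
model_real = models.resnet50(weights=None).eval()
model_gen = models.resnet50(weights=None).eval()
x = torch.randn(1, 3, 224, 224)                        # stand-in for an image tile

r_real = filter_relevance(model_real, model_real.layer4[-1].conv3, x, class_idx=0)
r_gen = filter_relevance(model_gen, model_gen.layer4[-1].conv3, x, class_idx=0)

top_filters = r_real.abs().topk(10).indices            # filters the class relies on most
similarity = F.cosine_similarity(r_real, r_gen, dim=0) # ~1 => similar filter usage
```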
“…The second set evaluates the practical usability of the generated images in deep learning models. To complement this set, we investigated the convolutional filters learned by the classifiers, trained on both real and generated data, using the Concept Relevance Propagation (CRP) algorithm (38), an explainable artificial intelligence approach.…”
Section: Introduction
confidence: 99%