2022
DOI: 10.1109/access.2022.3157589

Privacy-Preserving Case-Based Explanations: Enabling Visual Interpretability by Protecting Privacy

Abstract: Deep Learning achieves state-of-the-art results in many domains, yet its black-box nature limits its application to real-world contexts. An intuitive way to improve the interpretability of Deep Learning models is by explaining their decisions with similar cases. However, case-based explanations cannot be used in contexts where the data exposes personal identity, as they may compromise the privacy of individuals. In this work, we identify the main limitations and challenges in the anonymization of case-based exp…

Cited by 6 publications (4 citation statements)
References 29 publications
“… Montenegro et al. (2021a) propose a generative model to privatize case-based explanations, as well as a way to derive counterfactual explanations. However, the authors later applied the method to glaucoma detection, revealing several drawbacks for the application in medical practice (Montenegro et al., 2021b; 2022).…”
Section: Methods
confidence: 99%
“…However, their increasing relevance raises important questions about the impact they have on the privacy of an AI system. Most previous works investigating the interdependency of privacy and explainability focused on low-level XAI methods based on input feature attribution (Milli et al., 2019; Shokri et al., 2021; Saifullah et al., 2022), while Montenegro et al. (2022) investigated privacy-preserving case-based explanations. We argue that more complex XAI methods have received too little attention when it comes to an assessment of their implications for privacy.…”
Section: Introduction
confidence: 99%
“…The works of Montenegro et al. (2021, 2022) create privacy-preserving case-based explanations with a generative adversarial network (GAN) that anonymizes an input image by generating an image of a similar example. The anonymized images are used as explanations of an original input image because they serve as a kind of similar example.…”
Section: Dataset Anonymization For Images
confidence: 99%
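To illustrate the mechanism this statement describes, here is a minimal sketch of GAN-based image anonymization: a generator rewrites an input image, a discriminator pushes the output to stay realistic, and a privacy term pushes it away from the subject's identity. The network shapes, the loss weight, and the pixel-based `identity_distance` placeholder are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the cited method's implementation) of
# anonymizing an image with a GAN: G maps an input to a privatized look-alike,
# D enforces realism, and a privacy term penalizes identity similarity.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # Toy encoder-decoder for 1x64x64 grayscale images (assumed size).
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 64, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

def identity_distance(a, b):
    # Placeholder: a real system would compare identity-recognition
    # embeddings; plain pixel distance stands in for that here.
    return torch.mean((a - b) ** 2)

G, D = Generator(), Discriminator()      # D is assumed pretrained/frozen here
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

x = torch.rand(8, 1, 64, 64)             # dummy batch of input images
anon = G(x)                              # privatized look-alike explanations
real_logit = D(anon)
adv_loss = bce(real_logit, torch.ones_like(real_logit))  # stay realistic
priv_loss = -identity_distance(anon, x)  # maximize distance from the identity
loss = adv_loss + 0.1 * priv_loss        # 0.1 weight is an arbitrary choice

opt_g.zero_grad()
loss.backward()
opt_g.step()
```

A faithful implementation would also need a term preserving the explanatory (e.g., disease-relevant) content, since the anonymized images must still serve as similar-example explanations; this sketch omits that loss.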
“…Separate work claimed private data demands a right to an explanation, particularly in cases of closed-source data and algorithms in industry (Kim & Routledge, 2018). Privacy in machine learning more generally is gaining attention (Liu et al., 2021), and explainable dataset anonymization has few prior works (Montenegro et al., 2022). Moreover, existing works do not consider compression.…”
Section: Motivation and Survey Of Related Work
confidence: 99%