2024
DOI: 10.1007/s13748-024-00315-2

Explainable machine learning models with privacy

Aso Bozorgpanah,
Vicenç Torra

Abstract: The importance of explainable machine learning models is increasing because users want to understand the reasons behind decisions in data-driven models. Interpretability and explainability emerge from this need to design comprehensible systems. This paper focuses on privacy-preserving explainable machine learning. We study two data masking techniques: maximum distance to average vector (MDAV) and additive noise. The former achieves k-anonymity, and the latter uses Laplacian noise to avoid record leakage…
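
As a rough illustration of the two masking techniques named in the abstract, the sketch below shows a simplified MDAV-style microaggregation (each record is replaced by its group centroid to obtain k-anonymity) and additive Laplacian noise masking. This is a minimal sketch assuming numeric data and NumPy; the function names and parameters (k, scale) are illustrative and do not come from the paper itself.

```python
import numpy as np

def mdav_microaggregation(X, k):
    """Simplified MDAV-style microaggregation: group records into clusters
    of k and replace each record by its cluster centroid (k-anonymity)."""
    X = np.asarray(X, dtype=float)
    masked = np.empty_like(X)
    remaining = list(range(len(X)))

    while len(remaining) >= 3 * k:
        pts = X[remaining]
        centroid = pts.mean(axis=0)
        # r: record farthest from the centroid; s: record farthest from r
        r = remaining[np.argmax(np.linalg.norm(pts - centroid, axis=1))]
        s = remaining[np.argmax(np.linalg.norm(pts - X[r], axis=1))]
        for anchor in (r, s):
            if anchor not in remaining:
                continue
            dists = np.linalg.norm(X[remaining] - X[anchor], axis=1)
            group = [remaining[i] for i in np.argsort(dists)[:k]]
            masked[group] = X[group].mean(axis=0)
            remaining = [i for i in remaining if i not in group]

    # Simplification: the last (< 3k) records form a single group;
    # standard MDAV splits them further when at least 2k remain.
    if remaining:
        masked[remaining] = X[remaining].mean(axis=0)
    return masked

def laplace_noise_masking(X, scale=1.0, rng=None):
    """Additive noise masking: perturb every value with Laplacian noise."""
    rng = np.random.default_rng() if rng is None else rng
    X = np.asarray(X, dtype=float)
    return X + rng.laplace(loc=0.0, scale=scale, size=X.shape)

# Example: mask a small numeric dataset with both techniques
X = np.random.default_rng(0).normal(size=(20, 3))
X_kanon = mdav_microaggregation(X, k=4)
X_noisy = laplace_noise_masking(X, scale=0.5)
```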

Cited by 1 publication (1 citation statement)
References 42 publications

“…As a result, methods defending the privacy of explainable models have been proposed (Montenegro et al, 2021 ; Nguyen et al, 2023 ; Pentyala et al, 2023 ). The effect of privacy-preserving training methods on explanations is far less studied (Naidu et al, 2021 ; Patel et al, 2022 ; Bozorgpanah and Torra, 2024 ). This present study tackles the lack of work investigating the overall influence of private training on feature-based explanations in deep learning for different data modalities.…”
Section: Related Work
confidence: 99%