2020
DOI: 10.1101/2020.08.03.235416
Preprint

Differential Privacy Protection Against Membership Inference Attack on Machine Learning for Genomic Data

Abstract: Machine learning is a powerful tool for modeling massive genomic data, but genome privacy is a growing concern. Studies have shown that not only the raw data but also the trained model can potentially infringe genome privacy. An example is the membership inference attack (MIA), in which an adversary who only queries a given target model, without knowing its internal parameters, can determine whether a specific record was included in the target model's training dataset. Differential privacy (DP) has been used to de…
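To make the attack setting concrete, below is a minimal sketch of a confidence-threshold membership inference attack in the black-box setting the abstract describes. The dummy model, function names, and threshold value are illustrative assumptions, not the paper's exact attack; the idea is only that models tend to be more confident on records they were trained on.

```python
import numpy as np

class _DummyModel:
    """Stand-in for a black-box target model (illustrative only)."""
    def predict_proba(self, x):
        # Pretend the model is overconfident on a "training" record.
        return np.array([[0.05, 0.95]])

def membership_inference(target_model, record, true_label, threshold=0.9):
    """Query the target model through its prediction interface only
    (no access to internal parameters, as in the abstract) and guess
    'member' when its confidence on the record's true label exceeds
    a calibrated threshold."""
    probs = target_model.predict_proba(record.reshape(1, -1))[0]
    return probs[true_label] >= threshold

# Example query: high confidence on the true label flags the record as a member.
is_member = membership_inference(_DummyModel(), np.zeros(10), true_label=1)
print(is_member)  # True
```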

Cited by 16 publications (18 citation statements) | References 39 publications
“…DP is often obtained by applying a procedure that introduces randomness into the data. According to Chen et al [30], DP has been the most widely used method to assess privacy exposure relating to individuals. In addition, Chen et al [30] have evaluated DP's uses and its efficiency as a solution to MIA on genomic data.…”
Section: Discussion
confidence: 99%
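The quoted statement notes that DP is typically obtained by injecting randomness; the Laplace mechanism is the textbook instance. Below is a minimal sketch, assuming a scalar query with known sensitivity; the function and parameter names are illustrative, not drawn from the cited work.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a query answer with epsilon-DP by adding Laplace noise
    with scale sensitivity / epsilon (the standard Laplace mechanism)."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privatize a count query (sensitivity 1) at epsilon = 0.5;
# smaller epsilon means more noise and stronger privacy.
noisy_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
print(noisy_count)
```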
“…Cryptographic techniques can also be used to secure ML models [28,29]. Chen et al [30] assess the effectiveness of using Differential Privacy as a genomic data protection mechanism to minimize the danger of membership inference attacks.…”
Section: Defence Techniques Against Attacks in the Testing/Inferring Phase
confidence: 99%
“…Therefore, devising PPTs with fewer parameters for configuration, or possibly self-learning parameter values based on data, is still an open issue in the privacy domain. In recent years, ML has been extensively used to address various privacy concerns emerging from digitization [404]-[407]. Therefore, employing different ML concepts and techniques to secure personal data is an interesting area of research.…”
Section: B. Promising Future Research Directions
confidence: 99%
“…DP is employed to prevent the leakage of personal information during the training stage of a deep learning model [40][41][42]. Moreover, because healthcare data contain privacy-sensitive information, DP is adopted in various deep learning and artificial intelligence systems for healthcare [43][44][45][46][47].…”
Section: Local Differential Privacy for De-identification
confidence: 99%
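The statement above concerns DP applied during deep learning training; a common realization is per-example gradient clipping followed by Gaussian noise, in the style of DP-SGD. Below is a minimal numpy sketch of one such update step; the parameter names and plain-SGD update are illustrative assumptions, not the cited systems' exact implementation.

```python
import numpy as np

def dp_gradient_step(per_example_grads, clip_norm, noise_multiplier, lr, params, rng=None):
    """One DP-SGD-style update: clip each example's gradient to at most
    clip_norm, average the clipped gradients, add Gaussian noise whose
    std on the mean is noise_multiplier * clip_norm / batch_size, then
    take a plain gradient step."""
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise_std = noise_multiplier * clip_norm / len(clipped)
    noise = rng.normal(0.0, noise_std, size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

# Example: one private update over a toy batch of two per-example gradients.
grads = [np.array([3.0, -1.0]), np.array([0.5, 0.2])]
params = dp_gradient_step(grads, clip_norm=1.0, noise_multiplier=1.1,
                          lr=0.1, params=np.zeros(2))
print(params)
```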