2021
DOI: 10.1145/3436755

When Machine Learning Meets Privacy

Abstract: The newly emerged machine learning (e.g., deep learning) methods have become a strong driving force to revolutionize a wide range of industries, such as smart healthcare, financial technology, and surveillance systems. Meanwhile, privacy has emerged as a big concern in this machine learning-based artificial intelligence era. It is important to note that the problem of privacy preservation in the context of machine learning is quite different from that in traditional data privacy protection, as machine learning…

Cited by 224 publications (85 citation statements)
References 153 publications
“…As discussed earlier, this may cause privacy concerns among users and data owners. As a result, traditional centralized healthcare applications find limited applicability due to privacy concerns [39,40,41]. To address the privacy issues in machine learning, researchers have been working on federated learning (FL) and transfer learning (TL).…”
Section: Machine Learning in Healthcare
confidence: 99%
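As a rough illustration of the federated learning idea referenced in the statement above, here is a minimal FedAvg-style sketch: clients train on their own private data and share only model parameters, which a server averages. The model, data, and function names are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training of a linear model under squared loss."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of the mean squared error
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server-side aggregation: weight each client's model by its data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Synthetic private datasets held by three clients (never sent to the server).
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]

global_w = np.zeros(3)
for _ in range(10):                          # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])

print(global_w)
```

The privacy benefit in this sketch comes only from keeping raw records local; production systems typically layer secure aggregation or differential privacy on top of the parameter exchange.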
“…NB: This article is not meant to present a comprehensive survey of the literature in the field, nor an exhaustive list of all threat models and attacks on privacy in machine learning; interested readers may refer to existing surveys, e.g., [1].…”
Section: Prologue
confidence: 99%
“…Regarding the methods for making ML models privacy-aware/compliant, they can be divided into anonymization techniques, perturbation techniques, and distributed protocols [28,87,88]. The anonymization techniques try to maintain the privacy of the data subjects by obscuring personally identifying information within a dataset while preserving data utility.…”
Section: Related Work
confidence: 99%
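To make the perturbation category above concrete, here is a minimal sketch of the Laplace mechanism, a standard perturbation technique that adds calibrated noise to a query answer so that no single record's influence is visible. The clipping bounds, epsilon value, and synthetic data are illustrative assumptions, not drawn from the cited works.

```python
import numpy as np

def laplace_mean(values, lower, upper, epsilon):
    """Differentially private estimate of the mean of `values`.

    Each value is clipped to [lower, upper]; the sensitivity of the mean is
    then (upper - lower) / n, which calibrates the Laplace noise scale.
    """
    values = np.clip(np.asarray(values, dtype=float), lower, upper)
    n = len(values)
    sensitivity = (upper - lower) / n
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise

# Example: a private mean over synthetic "ages" with a modest privacy budget.
ages = np.random.randint(18, 90, size=1000)
print(laplace_mean(ages, lower=18, upper=90, epsilon=0.5))
```

Anonymization techniques (e.g., k-anonymity-style generalization) instead transform the identifying attributes themselves, while distributed protocols keep data partitioned across parties, as in the federated learning sketch earlier in this report.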