2019 IEEE Symposium on Security and Privacy (SP)
DOI: 10.1109/sp.2019.00029

Exploiting Unintended Feature Leakage in Collaborative Learning

Abstract: Collaborative machine learning and related techniques such as federated learning allow multiple participants, each with his own training dataset, to build a joint model by training locally and periodically exchanging model updates. We demonstrate that these updates leak unintended information about participants' training data and develop passive and active inference attacks to exploit this leakage. First, we show that an adversarial participant can infer the presence of exact data points, for example, specific …
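
To make the setup in the abstract concrete, here is a minimal federated-averaging sketch (an illustration under assumptions, not the paper's code: the logistic-regression model, client counts, and helper names such as local_update and federated_round are made up for this example). Each participant trains locally on its own data and periodically sends parameters that are averaged into the joint model; those exchanged updates are the signal the paper shows can be exploited.

import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=1):
    # One participant's local training: plain SGD on a logistic-regression model.
    w = weights.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1.0 / (1.0 + np.exp(-xi @ w))
            w -= lr * (pred - yi) * xi          # gradient step on one local example
    return w

def federated_round(global_w, client_data):
    # One communication round: every client trains locally, the updates are averaged.
    updates = [local_update(global_w, X, y) for X, y in client_data]
    return np.mean(updates, axis=0)             # these exchanged updates are what leaks

# Toy run: 3 participants with private data, 10 rounds of collaborative training.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 5)), rng.integers(0, 2, size=20)) for _ in range(3)]
w = np.zeros(5)
for _ in range(10):
    w = federated_round(w, clients)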

Cited by 1,121 publications (971 citation statements)
References 46 publications
“…Future research should thus seek to explore the application of DLT-based federated learning to more complex AI models and investigate ways to reduce the induced performance overhead in real-world application scenarios. Furthermore, despite increased confidentiality, research has also shown that federated learning is potentially vulnerable to inference attacks, whereby an adversary can aim to extract information about private training data by inferring the AI model multiple times (Melis et al 2019;Wang et al 2019). In addition to employing DLT for preserving training data provenance and AI model integrity, future research should therefore also explore how DLT could help with preventing inference attacks on federated learning networks.…”
Section: DLT-based Federated Learning (mentioning)
confidence: 99%
“…The other privacy leakage that has been revealed is the membership inference. For example, Melis et al [18] demonstrated the membership inference attack on federated learning by observing the gradients aggregated from the model trained on clients. It is not surprising to envision such a membership inference attack could be applicable to split learning.…”
Section: Related Work (mentioning)
confidence: 99%
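
As a rough illustration of the gradient-observation idea in the passage above (a sketch under assumptions, not the actual attack of Melis et al.): an adversarial participant who sees a round's weight change can compare it with the gradient a candidate record would induce, and strong alignment is weak evidence that the record was used. The logistic loss, cosine-similarity score, and toy data below are illustrative choices.

import numpy as np

def logistic_grad(w, x, y):
    # Gradient of the logistic loss for a single record (x, y).
    pred = 1.0 / (1.0 + np.exp(-x @ w))
    return (pred - y) * x

def membership_score(observed_delta, w, candidate_x, candidate_y):
    # Cosine similarity between the candidate's gradient and the negated weight
    # change observed by the adversary; SGD moves against the gradient, so a
    # record that contributed to the round should roughly align with -delta.
    g = logistic_grad(w, candidate_x, candidate_y)
    target = -observed_delta
    denom = np.linalg.norm(g) * np.linalg.norm(target)
    return float(g @ target / denom) if denom > 0 else 0.0

# Toy usage: the adversary observed delta = new_weights - old_weights for a round.
rng = np.random.default_rng(1)
w_old = rng.normal(size=5)
member = rng.normal(size=5)
delta = -0.1 * logistic_grad(w_old, member, 1)       # round dominated by this record
print(membership_score(delta, w_old, member, 1))               # close to 1.0
print(membership_score(delta, w_old, rng.normal(size=5), 1))   # typically much lower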
“…In a federated learning setup, instead of sharing data, clients share models. Though this seems to provide increased privacy, there has been a multitude of privacy attacks, including reconstruction and membership inference attacks [18] [16] [14], demonstrating that additional privacy is required in the form of protecting model parameters. Cryptographic techniques such as secure multiparty computation (MPC) [5,9,20] guarantee that clients don't learn anything except the final cumulative model weight.…”
Section: Related Work (mentioning)
confidence: 99%
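
As a minimal illustration of the MPC-style guarantee described above (a toy sketch only; real secure aggregation protocols work over finite fields with cryptographic masking and dropout handling, and the helper names below are made up): each client splits its update into additive shares, so no single party ever sees another client's full update, yet the shares sum to the correct aggregate.

import numpy as np

rng = np.random.default_rng(2)

def make_shares(update, n_parties):
    # Split one client's update into n_parties random shares that sum back to it.
    shares = [rng.normal(size=update.shape) for _ in range(n_parties - 1)]
    shares.append(update - sum(shares))              # last share fixes the total
    return shares

def secure_aggregate(client_updates):
    # Each party receives exactly one share per client; no party sees a full
    # individual update, but the sum of all shares equals the aggregate update.
    n = len(client_updates)
    shares_by_party = [[] for _ in range(n)]
    for upd in client_updates:
        for p, share in enumerate(make_shares(upd, n)):
            shares_by_party[p].append(share)
    partial_sums = [sum(shares) for shares in shares_by_party]   # local at each party
    return sum(partial_sums)

updates = [rng.normal(size=4) for _ in range(3)]
print(np.allclose(secure_aggregate(updates), sum(updates)))      # True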