An Overview of Federated Deep Learning Privacy Attacks and Defensive Strategies
2020 · Preprint
DOI: 10.48550/arxiv.2004.04676
Cited by 12 publications (17 citation statements)
References 0 publications
“…We have also provided results for the widely used LFPW [1] dataset (Table 2). With our EF+CB baseline we could not replicate the numbers reported by [2]. (This could be due to the fact that we could not obtain the whole dataset.)…”
Section: B Distributed Landmark Localization Algorithm
Citation type: mentioning; confidence: 83%
“…point [2]. However, this approach has other advantages, including distribution of the required processing power needed for machine learning, which eliminates the need for heavy central servers, as well as reduction of data communication bandwidth requirements for transferring large amounts of data between nodes.…”
Citation type: mentioning; confidence: 99%
“…Privacy and Security in FL. Enthoven et al. [32] presented a structured overview of privacy attacks and defense mechanisms in FL, but only for deep learning models. Lyu et al. [83] additionally elaborated on security attacks and pointed out weaknesses in current countermeasures through a qualitative analysis of the literature.…”
Section: Review Studies
Citation type: mentioning; confidence: 99%
“…Enthoven and Al-Ars [131] summarize most defence strategies used in federated learning, which can be categorized into three types: 1) subsampling or compressing the communicated gradients [5,6,132]; 2) differential privacy and SMC [18]; and 3) robust aggregation [133], using e.g. byzantine-resilient aggregation rules [134,135].…”
Section: Adversarial Federated Neural Architecture Search
Citation type: mentioning; confidence: 99%