2021
DOI: 10.48550/arxiv.2101.00159
Preprint

Fidel: Reconstructing Private Training Samples from Weight Updates in Federated Learning

Abstract: With the increasing number of data collectors such as smartphones, immense amounts of data are available. Federated learning was developed to allow for distributed learning on a massive scale whilst still protecting each user's privacy. This privacy claim rests on the notion that the centralized server has no access to a client's data, only to the client's model update. In this paper, we evaluate a novel attack method within regular federated learning which we name the First Dense Layer Attack (Fidel)…

Cited by 6 publications (13 citation statements)
References 4 publications (5 reference statements)
“…The complexity of these architectures varies with respect to the number of layers (depth), the number of neurons in each layer (width), and the type of connections between neurons. Our study shows that researchers tend to use simple architectures to evaluate their attacks in 30 (62%) papers, e.g., a 1-layer CNN [33] or a 1-layer MLP [12]. Only in 18 (38%) papers did the authors consider complex state-of-the-art CNN models, such as VGG [120], ResNet [52], and DenseNet [57], the winners of the famous ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [110].…”
Section: Fallacies in Evaluation Setups (mentioning)
confidence: 99%
“…Several model inversion attacks reconstruct the training data by exploiting the shared gradients [33], [136], [141]. In particular, they exploit mathematical properties of gradients in specific model architectures to infer information about the input data.…”
Section: Fallacies in Evaluation Setups (mentioning)
confidence: 99%
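For context, the gradient property that such first-layer reconstruction attacks rely on can be stated generically. The identity below is a standard derivation for a fully-connected input layer, not the specific analysis of the Fidel paper or of the works cited above. For a single training sample $x$, pre-activations $z = Wx + b$ of the first dense layer, and loss $L$:

\[
\frac{\partial L}{\partial W_{ij}} = \frac{\partial L}{\partial z_i}\, x_j,
\qquad
\frac{\partial L}{\partial b_i} = \frac{\partial L}{\partial z_i}
\quad\Longrightarrow\quad
x_j = \frac{\partial L/\partial W_{ij}}{\partial L/\partial b_i}
\;\text{ for any } i \text{ with } \frac{\partial L}{\partial b_i} \neq 0.
\]

The identity is exact only for a single-sample update; gradients averaged over a batch yield a weighted mixture of the inputs, which is why such attacks degrade as the local batch size grows.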
“…They also proposed a new initialization mechanism to speed up the attack convergence. Unlike previous approaches, Enthoven et al. [10] introduced an analytical attack that exploits fully-connected layers to reconstruct the input data on the server side, and they extended this exploitation to CNNs. Recently, Zhu et al. [40] proposed a recursive closed-form attack.…”
Section: Data Reconstruction (mentioning)
confidence: 99%
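To make the preceding statement concrete, the following minimal PyTorch sketch demonstrates the general principle of recovering the input of a first fully-connected layer from its weight and bias gradients. It is an illustration under simplifying assumptions (a single sample, an untrained toy model, hypothetical layer sizes), not the Fidel implementation or the exact method of any cited paper.

import torch

torch.manual_seed(0)

# Hypothetical toy model: a first dense layer followed by a small classifier head.
# Sizes and names are illustrative only, not taken from the Fidel paper.
in_dim, hidden, n_classes = 784, 32, 10
fc1 = torch.nn.Linear(in_dim, hidden)
head = torch.nn.Linear(hidden, n_classes)

x = torch.rand(1, in_dim)   # one "private" training sample (e.g. a flattened image)
y = torch.tensor([3])       # its label

# Local forward/backward pass, as a federated client would compute it.
loss = torch.nn.functional.cross_entropy(head(torch.relu(fc1(x))), y)
loss.backward()

# "Server-side" reconstruction from the first dense layer's gradients:
# dL/dW[i, j] = dL/dz[i] * x[j] and dL/db[i] = dL/dz[i], so any row i with a
# non-zero bias gradient gives x[j] = dL/dW[i, j] / dL/db[i].
grad_W, grad_b = fc1.weight.grad, fc1.bias.grad
i = torch.argmax(grad_b.abs())          # pick a well-conditioned row
x_reconstructed = grad_W[i] / grad_b[i]

print(torch.allclose(x_reconstructed, x.squeeze(0), atol=1e-5))  # expected: True

With mini-batches larger than one, the shared gradient averages the per-sample contributions, so this direct division no longer returns a single input; the cited works address these harder settings with analytical or optimization-based extensions.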
“…They also propose a new initialization mechanism to speed up the attack convergence. Unlike previous approaches, Enthoven et al. [8] introduce an analytical attack that exploits fully-connected layers to reconstruct the input data on the server side, and they extend this exploitation to CNNs. Recently, Zhu et al. [36] propose a recursive closed-form attack.…”
Section: Related Work (mentioning)
confidence: 99%