2023
DOI: 10.1007/978-3-031-27041-3_4
Decentralized Federated Learning: A Defense Against Gradient Inversion Attack

Cited by 4 publications (2 citation statements)
References 9 publications
“…In a decentralized federated learning system, participants communicate with each other without the coordination of an aggregation server. Lu et al [68] proposed a decentralized federated learning (DFL) method to defend against gradient inversion attacks and demonstrated its security in a deep leakage from gradients (DLG) setting, as shown in Figure 9. Li et al [69] perform a cluster analysis on model parameters to distinguish good models from bad ones and thereby detect potentially malicious participants.…”
Section: Build a Trusted Federated Learning System
confidence: 99%
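The serverless setup described in this statement can be illustrated with a toy gossip-averaging loop, in which each participant repeatedly averages its model with its neighbors' models instead of uploading gradients to an aggregator. This is only a minimal sketch of the general idea: the function names, the ring topology, and the uniform averaging weights are illustrative assumptions, not the protocol from the cited paper.

```python
import numpy as np

def gossip_round(models, neighbors):
    """One synchronous gossip step: each node replaces its model with
    the average of its own model and its neighbors' models."""
    new_models = []
    for i in range(len(models)):
        group = [i] + neighbors[i]
        new_models.append(np.mean([models[j] for j in group], axis=0))
    return new_models

# Four participants on a ring, each starting from a different local model.
models = [np.full(3, float(i)) for i in range(4)]          # models 0,1,2,3
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # ring topology

for _ in range(20):
    models = gossip_round(models, neighbors)

# Because the averaging weights are symmetric and doubly stochastic,
# all models converge to the global mean (here 1.5) without any node
# ever sending its raw update to a central server.
```

Since the mixing step is doubly stochastic, the global mean is preserved at every round and the per-round disagreement shrinks geometrically, which is why no aggregation server is needed to reach consensus.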
“…The study Deep Leakage from Gradients (DLG) started the conversation on reconstructing data from model gradients; since then, studies such as Improved Deep Leakage from Gradients [23], PGSL [21], Industrial Private AI [5], and SPIN [7] have leveraged model inversion attacks for specific applications. Following this trend, many studies have used derivatives of the aforementioned works to perform data or class-label leakage from gradients, such as WDLG [37], DEFEAT [38], and GLAUS [39].…”
Section: Model and Feature Inversion
confidence: 99%
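The core observation behind these gradient-leakage attacks can be shown on a toy linear layer: for z = Wx + b with loss L, the per-example gradients satisfy dL/dW = (dL/dz)·xᵀ and dL/db = dL/dz, so an attacker who sees the gradients can recover the private input x analytically. This sketch uses that simplified analytic case (with made-up names and data) rather than the iterative gradient-matching optimization that DLG applies to deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)            # the private training example
W = rng.normal(size=(3, 4))
b = np.zeros(3)
target = rng.normal(size=3)

z = W @ x + b                     # forward pass of a single linear layer
err = z - target                  # dL/dz for L = 0.5 * ||z - target||^2
grad_W = np.outer(err, x)         # dL/dW = (dL/dz) x^T
grad_b = err                      # dL/db = dL/dz

# The attacker observes only (grad_W, grad_b), as in federated learning,
# and divides a weight-gradient row by the matching bias-gradient entry:
x_reconstructed = grad_W[0] / grad_b[0]

print(np.allclose(x_reconstructed, x))
```

Deep networks do not admit this closed-form division, which is why DLG and its derivatives instead optimize a dummy input until its gradients match the observed ones; the leakage mechanism, however, is the same.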