2023
DOI: 10.1016/j.ins.2023.02.025

Model poisoning attack in differential privacy-based federated learning


Cited by 23 publications (8 citation statements)
References 14 publications
“…More generally, the fact that the official network is, by design, produced to assist the secret network in extracting information might allow the framework to find ways around defence mechanisms that have been proven successful against similar attacks, such as model inversion ones. On the other hand, differential privacy [18], together with strategies to decouple data from model training [14], should prove successful in protecting against it.…”
Section: Discussion
confidence: 99%
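The defence this statement points to can be made concrete with a short sketch. The Python fragment below is a minimal, illustrative example of a Gaussian-mechanism style perturbation of a client update, in the spirit of differentially private federated learning; the function name privatize_update and the parameters clip_norm and noise_multiplier are assumptions for illustration, not the protocol of [18] or of the reviewed paper.

```python
# Minimal sketch (illustrative assumption, not the cited papers' protocol):
# a client clips its model update and adds calibrated Gaussian noise before
# sharing it, limiting what a model-inversion adversary can reconstruct.
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip the update's L2 norm to clip_norm and add Gaussian noise."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Usage: a raw client gradient is privatized before it leaves the device.
raw_update = np.array([0.8, -1.5, 2.2])
print(privatize_update(raw_update))
```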
“…In gradient ascent attacks, the attacker updates the model in the direction that maximizes the loss. The model shuffling attack aims at shuffling the model parameters without notably changing the loss [28].…”
Section: B. Poisoning Attacks in Federated Learning
confidence: 99%
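The two poisoning behaviours described in that statement can be sketched in a few lines. The following Python fragment is a hedged illustration only: gradient_ascent_update and model_shuffling_update are hypothetical helpers that show the direction of each manipulation, not the implementation of the attack in [28] or in the reviewed paper.

```python
# Illustrative-only sketch of the two malicious client behaviours mentioned above.
import numpy as np

def gradient_ascent_update(params, grad, lr=0.1):
    """Move *up* the loss gradient; a benign client would use params - lr * grad."""
    return params + lr * grad

def model_shuffling_update(params, rng=None):
    """Permute parameter entries: individual values stay plausible,
    but the submitted model they form is scrambled."""
    rng = rng or np.random.default_rng()
    return rng.permutation(params)

# Usage: a malicious client applies one manipulation before uploading its update.
params = np.array([0.5, -0.2, 1.3, 0.7])
grad = np.array([0.1, -0.3, 0.05, 0.2])
print(gradient_ascent_update(params, grad))
print(model_shuffling_update(params))
```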
“…Generally, most of the federated learning studies focus on looking for the best compromise between privacy and utility [63]. However, another crucial issue that must be considered is the malicious security threats encountered by federated learning [64].…”
Section: Software-Based Solutions
confidence: 99%