2023
DOI: 10.1109/tsc.2023.3250705
Privacy-Preserving and Reliable Decentralized Federated Learning

Cited by 11 publications (2 citation statements)
References 36 publications
“…In addressing the vulnerabilities associated with model performance and data privacy in FL systems, various research efforts have been undertaken. For instance, a quality-based aggregation method combined with local differential privacy is proposed to preserve both model accuracy and data privacy amid potential adversarial attacks on FL, as demonstrated in [15]. A systematic analysis of secure FL applications outlined in [16] highlights the significance of countering security threats to uphold user privacy and model integrity.…”
Section: Related Work (mentioning, confidence: 99%)
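The quality-based aggregation with local differential privacy mentioned in the statement above can be illustrated with a minimal sketch. The clipping bound, epsilon/delta values, and the choice of the Gaussian mechanism here are illustrative assumptions, not the actual method of [15]:

```python
import math
import numpy as np

def ldp_perturb(update, clip_norm=1.0, epsilon=1.0, delta=1e-5, rng=None):
    """Clip a local model update to an L2 bound, then add Gaussian noise
    calibrated to that bound (simplified Gaussian-mechanism sketch)."""
    rng = rng if rng is not None else np.random.default_rng()
    norm = np.linalg.norm(update)
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = update * scale  # L2 sensitivity of the clipped update is clip_norm
    sigma = clip_norm * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=update.shape)
```

Each client would perturb its update locally before sharing it, so the aggregator never sees the raw gradient; stronger privacy (smaller epsilon) adds more noise and trades off model accuracy, which is why [15] pairs the noise with quality-aware aggregation.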
“…FL's decentralized nature not only ensures the efficient processing of distributed IIoT data but also addresses some of the most pressing privacy concerns by allowing numerous compute nodes to collaboratively hone a shared global model without exposing individual datasets [1], [2]. In addition, FL leverages distributed computational resources to overcome the limitations of a central server, thereby enhancing scalability and responsiveness in IIoT environments [3], [4]. However, while FL is inherently more privacy-preserving than centralized approaches, it is not immune to security threats, such as model poisoning and Sybil attacks, which jeopardize the integrity of collective model training in the IIoT context [5], [6].…”
Section: Introduction (mentioning, confidence: 99%)
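The collaborative refinement of a shared global model described in the statement above is typically built on an aggregation rule such as FedAvg. The following sketch assumes simple size-weighted averaging of client parameters and is not specific to the cited paper's protocol:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Size-weighted average of client parameter vectors (FedAvg-style).

    client_weights: list of same-shape arrays, one per client.
    client_sizes:   number of local training samples per client.
    """
    total = float(sum(client_sizes))
    agg = np.zeros_like(np.asarray(client_weights[0], dtype=float))
    for w, n in zip(client_weights, client_sizes):
        agg += (n / total) * np.asarray(w, dtype=float)  # weight by data share
    return agg
```

In a decentralized setting the same rule can be applied peer-to-peer over neighbors instead of at a central server; the security threats noted above (model poisoning, Sybil attacks) target exactly this step, since a single malicious update enters the average unchecked.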