2022 IEEE Conference on Communications and Network Security (CNS)
DOI: 10.1109/cns56114.2022.9947237
Network-Level Adversaries in Federated Learning

Cited by 9 publications (5 citation statements). References 19 publications.

Citation statements (ordered by relevance):
“…For details, see a survey paper such as [3]. As another direction of the vulnerability issue, Severi et al. pointed out the possibility of network-based adversarial attacks and explored defense methods [32].…”
Section: B. Vulnerability Issue on the Global Model
Mentioning, confidence: 99%
“…More specifically, backdoor attacks have been studied in centralized Federated Learning [25], [28]. In addition to classical poisoning attacks, recent work on network-level adversaries in federated learning showed that adversaries might cleverly drop network packets and significantly reduce the model's performance on sub-populations [65]. Edge federated learning.…”
Section: Related Work
Mentioning, confidence: 99%
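To make the packet-dropping idea in the excerpt above concrete, the toy sketch below simulates a server that averages only the client updates that actually arrive; an on-path adversary that drops the packets of clients holding a target sub-population's data therefore shifts the aggregate away from that group. All client identities, update values, and the drop set are hypothetical, and this is only a conceptual illustration, not the attack algorithm from the cited paper.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-client model updates: clients 0-7 hold majority-population
# data, clients 8-9 hold data from a minority sub-population.
updates = {cid: rng.normal(loc=0.0, scale=1.0, size=4) for cid in range(8)}
updates.update({cid: rng.normal(loc=3.0, scale=1.0, size=4) for cid in range(8, 10)})

def aggregate(client_updates):
    """Plain averaging of whatever updates reach the server."""
    return np.mean(list(client_updates.values()), axis=0)

# Benign round: every update arrives.
benign = aggregate(updates)

# On-path adversary drops the packets carrying the sub-population clients'
# updates, so the server silently averages only what it receives.
dropped = {8, 9}
attacked = aggregate({cid: u for cid, u in updates.items() if cid not in dropped})

print("benign aggregate:  ", benign.round(2))
print("attacked aggregate:", attacked.round(2))  # drifts away from the minority clients' direction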
“…However, securing the network and in-transit data is an essential ingredient to allow privacy preservation in an FL setup. Only a few works in the literature have focused their efforts on investigating network-level risks and countermeasures in FL [1]. In the present paper, a networking approach is proposed to target privacy preservation in K8s-based FL.…”
Section: A. Privacy Preservation in FL
Mentioning, confidence: 99%
“…Federated Learning (FL) is a popular Machine Learning (ML) technique for training models on decentralised, sensitive data while preserving data privacy [1]. This paradigm allows local nodes to collaboratively train a shared model and use it for a given task (e.g., classification or regression) while keeping the data locally without sharing their direct information with the FL server [2].…”
Section: Introduction
Mentioning, confidence: 99%
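As a rough illustration of the FL paradigm described in this excerpt, the sketch below runs one FedAvg-style round: each client takes a gradient step on its private data, and only the resulting parameter vector, never the raw data, is sent to the server for weighted averaging. The variable names and the linear-regression task are hypothetical choices made for this minimal example.

import numpy as np

rng = np.random.default_rng(1)

# Private per-client datasets (they never leave the client in this sketch).
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    y = X @ np.array([1.5, -0.5]) + 0.1 * rng.normal(size=20)
    clients.append((X, y))

def local_step(w, X, y, lr=0.1):
    """One gradient step of least-squares regression on a client's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# One FedAvg-style round: the server broadcasts w, clients train locally, and
# only the updated parameter vectors are averaged (weighted by dataset size).
w = np.zeros(2)
local_models = [local_step(w, X, y) for X, y in clients]
sizes = np.array([len(y) for _, y in clients])
w = np.average(local_models, axis=0, weights=sizes)

print("global model after one round:", w.round(3))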