Proceedings 2020 Workshop on Decentralized IoT Systems and Security 2020
DOI: 10.14722/diss.2020.23003
Poisoning Attacks on Federated Learning-based IoT Intrusion Detection System

Abstract: Federated Learning (FL) is an appealing method for applying machine learning to large-scale systems due to the privacy and efficiency advantages that its training mechanism provides. One important field for FL deployment is emerging IoT applications. In particular, FL has recently been used for IoT intrusion detection systems, where clients, e.g., home security gateways, monitor traffic data generated by the IoT devices in their networks, train a local intrusion detection model, and send this model to a central ent…
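To make the setting concrete, here is a minimal sketch of the FL training loop the abstract describes, with one client mounting a label-flipping data-poisoning attack. This is a toy illustration, not the paper's actual setup: the logistic-regression client, the `local_update`/`fed_avg` names, and the synthetic "intrusion" labels are all assumptions for the example.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local logistic-regression update (illustrative)."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)      # gradient step
    return w

def fed_avg(updates):
    """Server-side aggregation: plain average of the client models."""
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # synthetic "intrusion" labels

w0 = np.zeros(3)
# Three benign clients train on disjoint shards of the traffic data.
benign = [local_update(w0, X[i::4], y[i::4]) for i in range(3)]

# Label-flipping data poisoning: one client trains on inverted labels,
# so its update pulls the global model away from the true decision rule.
poisoned = local_update(w0, X[3::4], 1.0 - y[3::4])

clean_model = fed_avg(benign)
tainted_model = fed_avg(benign + [poisoned])

def accuracy(w):
    return np.mean(((X @ w) > 0).astype(float) == y)

print(accuracy(clean_model), accuracy(tainted_model))
```

Because the server only sees model updates, never the raw traffic, it cannot tell from the data itself that the fourth client flipped its labels; this is exactly the attack surface the paper studies.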

Cited by 120 publications (69 citation statements)
References 22 publications (32 reference statements)
“…As poisoning attacks against FL systems continue to emerge as important research topics in the security and ML communities [14,4,33,56,21], we plan to continue our work in several ways. First, we will study the impacts of the attack and defense on diverse FL scenarios differing in terms of data size, distribution among FL participants (iid vs non-iid), data type, total number of instances available per class, etc.…”
Section: Discussion
confidence: 99%
“…A jointly learned model can be easily backdoored when a very small number of participants are compromised or controlled by an attacker. Both local data poisoning [70] and model poisoning [11], [71], [72] can be carried out by the attacker to implant a backdoor into the joint model. We consider data encryption schemes such as CryptoNet [73], SecureML [74] and CryptoNN [75], which train the model over encrypted data in order to protect data privacy, to fall under this backdoor attack surface.…”
Section: Collaborative Learning
confidence: 99%
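The model-poisoning variant mentioned in the excerpt above can be sketched in a few lines. In this toy example (an assumption for illustration, not code from any of the cited works), the attacker exploits plain averaging by scaling its update so that the aggregated model lands on an attacker-chosen target; the `fed_avg` name and the assumption that the attacker can estimate the benign clients' combined update are both hypothetical.

```python
import numpy as np

def fed_avg(global_w, client_deltas):
    """Server applies the average of the clients' model deltas."""
    return global_w + np.mean(client_deltas, axis=0)

global_w = np.array([0.5, -0.2, 0.1])
benign_deltas = [np.array([0.01, 0.02, -0.01]) for _ in range(9)]

# Model poisoning ("boosting" the malicious update): the attacker crafts
# a delta that, after being diluted by averaging over n clients, steers
# the global model exactly onto an attacker-chosen target. This assumes
# the attacker can estimate the sum of the benign deltas.
target = np.array([1.0, 1.0, 1.0])
n = len(benign_deltas) + 1
malicious = n * (target - global_w) - sum(benign_deltas)

new_w = fed_avg(global_w, benign_deltas + [malicious])
print(new_w)  # equals the attacker's target under these assumptions
```

This is why a single compromised participant suffices against unweighted averaging, and why the defenses surveyed in these citing works inspect or bound individual updates before aggregation.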
“…[Flattened table row] Federated learning [11], [71], [72] (IoT application [70]); federated learning with distributed backdoor [119]; federated meta-learning [120]; feature-partitioned collaborative learning [124] | White-Box | High | Offline | Model Inspection…”
Section: Collaborative Learning
confidence: 99%
“…Application of federated learning in the field of AIOps is mainly focused on anomaly detection [17,18,19] and intrusion detection [20,21]. Liu et al [19,18] propose a deep time-series anomaly detection model which is trained locally on IoT devices via federated learning.…”
Section: Related Work
confidence: 99%