2023
DOI: 10.1109/tii.2022.3173996
Multitentacle Federated Learning Over Software-Defined Industrial Internet of Things Against Adaptive Poisoning Attacks

Abstract: While federated learning (FL) is attractive for pooling privacy-preserving distributed training data, the credibility of participating clients and non-inspectable data pose new security threats, of which poisoning attacks are particularly rampant and hard to defend against without compromising privacy, performance, or other desirable properties of FL. To tackle this problem, we propose a self-purified FL (SPFL) method that enables benign clients to exploit trusted historical features of the locally purified model to superv…

Cited by 40 publications (19 citation statements)
References 66 publications
“…Finally, deep learning technology may challenge traditional trusted computing, where FPGAs are also involved on the application side. As future work, we consider exploring AI-based security technologies and how they match with the FPGA architecture [104].…”
Section: Discussion and Closing Thoughts
confidence: 99%
“…For example, one of the main threats that machine learning can induce in the systems is the denial of detection (DoD). The DoD can prevent machine learning from generating signals, for instance, from events, failures, and even cyberattacks using adversarial examples [239] and data poisoning [240]. Another threat that machine learning can induce is leaking sensitive information from the company or factory.…”
Section: F. Machine Learning
confidence: 99%
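The statement above mentions data poisoning as a way to induce denial of detection. As a minimal, self-contained sketch (not taken from any cited paper), a label-flipping poisoning attack can be simulated by reassigning a fraction of training labels to wrong classes before the data reaches the learner; the function name and parameters here are illustrative:

```python
import random

def flip_labels(labels, flip_fraction, num_classes, seed=0):
    """Simulate a label-flipping data-poisoning attack: a chosen fraction
    of training labels is reassigned to a different (wrong) class."""
    rng = random.Random(seed)
    poisoned = list(labels)
    n_flip = int(len(poisoned) * flip_fraction)
    for i in rng.sample(range(len(poisoned)), n_flip):
        # Replace the true class with any other class.
        wrong_classes = [c for c in range(num_classes) if c != poisoned[i]]
        poisoned[i] = rng.choice(wrong_classes)
    return poisoned

clean = [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]
dirty = flip_labels(clean, flip_fraction=0.3, num_classes=3)
changed = sum(c != d for c, d in zip(clean, dirty))
print(changed)  # 3 of the 10 labels were flipped
```

A defender that inspects only model updates, not raw data, cannot see these flips directly, which is why poisoning is hard to counter in privacy-preserving FL.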
“…Moreover, since the NDN caching mechanism stores the consumer-requested data (comprising both sensitive and nonsensitive information) at multiple static and mobile entities distributed across the network, securing such data is essential to avoid privacy leakage, poisoning, or backdoor attacks. Federated-learning-backed frameworks such as multitentacle federated learning (MTFL) [117] can be adapted to reduce the chances of the aforementioned attacks and ensure the trustworthiness of data stored on static and mobile nodes across the network.…”
Section: Federated Learning for Mobile Nodes
confidence: 99%
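The MTFL framework cited above organizes clients into tentacles with its own trust mechanism; as a minimal sketch of the underlying robust-aggregation idea only (not MTFL itself), a coordinate-wise median bounds the influence that a minority of poisoned client updates can have on the aggregated model, compared with plain federated averaging. All names below are illustrative:

```python
def fedavg(updates):
    """Plain federated averaging: coordinate-wise mean of client updates."""
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(updates[0]))]

def median_aggregate(updates):
    """Coordinate-wise median: a simple robust aggregator that limits the
    influence of a minority of poisoned client updates."""
    agg = []
    for i in range(len(updates[0])):
        vals = sorted(u[i] for u in updates)
        m = len(vals) // 2
        agg.append(vals[m] if len(vals) % 2 else (vals[m - 1] + vals[m]) / 2)
    return agg

# Four honest clients submit similar updates; one attacker submits an outlier.
honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9], [1.0, 1.05]]
poisoned = [100.0, -100.0]
updates = honest + [poisoned]

print(fedavg(updates))            # the mean is dragged far off by the attacker
print(median_aggregate(updates))  # the median stays near the honest consensus
```

With one attacker out of five clients, the mean of the first coordinate jumps to 20.8, while the coordinate-wise median remains at 1.0, matching the honest clients.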