2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops) 2021
DOI: 10.1109/ivworkshops54471.2021.9669214
Cybersecurity Threats in Connected and Automated Vehicles based Federated Learning Systems

Cited by 13 publications (3 citation statements); References 5 publications
“…The security issues posed by hostile nodes can delay the convergence of a federated learning system and degrade the accuracy of the trained model through model poisoning (Al Mallah et al., 2021). Al Mallah et al. (2021) explore these federated learning deployment vulnerabilities in CAVs and present several attack scenarios: misleading the model by repeatedly driving through the same street, a single node forging multiple identities, and submitting a model trained on false data, leading to a model poisoning attack. Moreover, Al Mallah et al. (2021) adopt the FL protocol proposed by Bonawitz et al. (2019) for mobile networks and discuss the following attacks. Standard falsified-information attacks: in these attacks, a hostile vehicle enters and exits a specific zone swiftly, continually forwarding fabricated real-time updates to the RSU.…”
Section: Cybersecurity in Federated Learning-Enabled CAVs
confidence: 99%
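The model-poisoning threat summarized above can be illustrated with a minimal sketch. This is a hypothetical toy example, not the cited paper's protocol or implementation: a server-side federated-averaging step (standing in for the RSU aggregation) is shown with one hostile client that submits a scaled, falsified update trained on false data, dragging the global model away from what the honest clients converge toward.

```python
import numpy as np

def fed_avg(updates):
    """RSU-side aggregation sketch: plain unweighted average of client updates."""
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_model = np.ones(4)  # hypothetical optimum honest clients approach

# Nine honest vehicles send noisy updates near the true model.
honest = [true_model + 0.01 * rng.standard_normal(4) for _ in range(9)]

# One hostile vehicle forges an update trained on false data, scaled up
# so that a single round of averaging already shifts the global model.
poisoned = honest + [-10.0 * true_model]

clean_global = fed_avg(honest)
attacked_global = fed_avg(poisoned)

print(np.linalg.norm(clean_global - true_model))    # small: model stays accurate
print(np.linalg.norm(attacked_global - true_model)) # large: model is poisoned
```

The same mechanism explains why the falsified-information and Sybil scenarios matter: plain averaging weights every reported update equally, so a node that forges multiple identities or repeatedly reports from the same zone multiplies its influence on the aggregate.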
“…These methods focus on transformations that can convert adversarial examples back to clean images (Guo et al., 2017; Jin et al., 2019; Liao et al., 2018; Samangouei et al., 2018).

4.1 | Attacks on FL-enabled CAVs

A federated learning system that claims better privacy can be considered secure only if it can also cope with potential attacks by malicious nodes and other security issues. The security issues posed by hostile nodes can delay the convergence of a federated learning system and degrade the accuracy of the trained model through model poisoning (Al Mallah et al., 2021). Al Mallah et al. (2021) explore these federated learning deployment vulnerabilities in CAVs and present several attack scenarios: misleading the model by repeatedly driving through the same street, a single node forging multiple identities, and submitting a model trained on false data, leading to a model poisoning attack.…”
Section: Cybersecurity in Federated Learning-Enabled CAVs
confidence: 99%