2020
DOI: 10.1109/tvt.2020.2977378
Poisoning and Evasion Attacks Against Deep Learning Algorithms in Autonomous Vehicles

Cited by 57 publications (25 citation statements) | References 39 publications
“…• AI/ML attack: Attacks on AI algorithms or machine learning (ML) models can be triggered in several ways: 1) manipulation of traffic signs to deceive the traffic sign recognition of CAVs [42], 2) data falsification, e.g., of GPS locations [43], or 3) false driving maneuver signals that mislead models into misclassifying an input [44]. In addition, a poisoning attack can reduce the prediction accuracy of the learned model by injecting malicious samples into the dataset used to train it [45], [46]. A minimal sketch of such a poisoning attack is given after this statement. • Social engineering attack: The adversary manipulates users into making security mistakes or giving away sensitive information that can then be used to breach authentication or access control mechanisms.…”
Section: Software-related Attacks (mentioning)
confidence: 99%
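To make the poisoning mechanism concrete, the following is a minimal sketch of a label-flipping poisoning attack on a simple classifier. It is illustrative only: the synthetic dataset, the logistic regression model, the flip_fraction parameter, and the poison_labels helper are all assumptions introduced here, not details taken from the cited works.

```python
# Minimal sketch of a label-flipping poisoning attack (illustrative only).
# Assumes scikit-learn and a synthetic binary classification dataset;
# flip_fraction and poison_labels are hypothetical names for this sketch.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def poison_labels(y, flip_fraction, rng):
    """Flip the labels of a randomly chosen fraction of training samples."""
    y_poisoned = y.copy()
    n_flip = int(flip_fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # binary (0/1) labels assumed
    return y_poisoned

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Train one model on clean labels and one on poisoned labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_tr, poison_labels(y_tr, flip_fraction=0.3, rng=rng)
)

# The poisoned model typically shows a visible drop in test accuracy.
print("clean accuracy:   ", clean_model.score(X_te, y_te))
print("poisoned accuracy:", poisoned_model.score(X_te, y_te))
```

In this toy setup the attacker only corrupts training labels; real poisoning attacks on CAV perception models may instead inject crafted inputs, but the effect shown here, degraded prediction accuracy of the learned model, is the same phenomenon the statement describes.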
“…The evasion and poisoning threats to AVs were demonstrated by Jiang et al. (2020), where two different attacks were considered.…”
Section: Review Of Related Work (mentioning)
confidence: 99%
“…Such attacks can be initiated against deep learning models that must continually update their training data and learning parameters to cope with the features of new attacks [62]. Second, the evasion attack is based on generating adversarial observations by adapting the attack structure so that it differs slightly from the malicious observations used to train the deep learning model; this reduces the probability of detecting the attack, and when the attack evades detection the performance of the system degrades markedly [63]. Third, the impersonation attack attempts to mimic the original data observations so that the deep learning model is misled into assigning incorrect labels to them [64].…”
Section: Interdependent, Interrelated and Collaborative Ecosystems (mentioning)
confidence: 99%
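The evasion mechanism described above can be sketched with a one-step, FGSM-style perturbation. The example below attacks a plain logistic regression model rather than a deep network so that the loss gradient can be written analytically; the dataset, the epsilon value, and the fgsm_perturb helper are assumptions made for this sketch, not the method of the cited works.

```python
# Minimal FGSM-style evasion sketch against a linear model (illustrative only).
# Deep learning models are attacked analogously, but require framework
# autodiff to obtain the input gradient; here it is computed analytically.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

def fgsm_perturb(model, x, y_true, epsilon):
    """One-step perturbation of x in the direction that increases the loss."""
    w = model.coef_[0]
    b = model.intercept_[0]
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # predicted probability of class 1
    grad = (p - y_true) * w                  # d(log-loss)/dx for logistic regression
    return x + epsilon * np.sign(grad)

x = X[0]
# Whether the prediction actually flips depends on epsilon and on the
# sample's margin; a larger epsilon makes evasion more likely.
x_adv = fgsm_perturb(model, x, y[0], epsilon=1.0)
print("clean prediction:      ", model.predict(x.reshape(1, -1))[0])
print("adversarial prediction:", model.predict(x_adv.reshape(1, -1))[0])
```

The key point matching the quoted statement is that the adversarial observation is only slightly different from legitimate inputs, yet it is crafted specifically to push the model toward a wrong label, which is what lets the attack evade detection while degrading system performance.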