2018
DOI: 10.1007/978-3-319-89500-0_33
A Method to Effectively Detect Vulnerabilities on Path Planning of VIN

Cited by 5 publications (10 citation statements)
References 11 publications
“…For example, [220] showed successful transferability of attacks across different DQN models. Additional early examples of black-box attacks on RL include [221] and [222].…”
Section: B Reinforcement Learning
confidence: 99%
“…The main contribution of Liu et al. (2017) is a proposed method for detecting potential attacks that can obstruct VIN's effectiveness. They built a 2D navigation task to demonstrate VIN, studied how to add obstacles that effectively degrade VIN's performance, and proposed a general method suitable for different kinds of environments.…”
Section: Adversarial Attack On VIN (AVI)
confidence: 99%
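The obstacle-based attack described above can be illustrated with a small sketch. This is not the authors' actual method; it is a hypothetical brute-force baseline, assuming a grid-world navigation task with a BFS shortest-path planner standing in for VIN, where the "attack" converts one free cell into an obstacle so as to maximally lengthen (or destroy) the planned path.

```python
from collections import deque

def shortest_path_len(grid, start, goal):
    """BFS shortest-path length on a 0/1 grid (1 = obstacle); -1 if unreachable."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1

def most_damaging_obstacle(grid, start, goal):
    """Try every free cell as a new obstacle; return the one that hurts
    the planner most (longest resulting path, infinity if path destroyed)."""
    best_cell, best_cost = None, shortest_path_len(grid, start, goal)
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if grid[r][c] == 0 and (r, c) not in (start, goal):
                grid[r][c] = 1                       # tentatively add obstacle
                d = shortest_path_len(grid, start, goal)
                grid[r][c] = 0                       # restore the cell
                cost = float('inf') if d == -1 else d
                if cost > best_cost:
                    best_cell, best_cost = (r, c), cost
    return best_cell, best_cost
```

A real attack on VIN would instead evaluate the learned value map rather than re-run an exact planner, but the brute-force search makes the objective (maximally degrading path planning with a minimal environment change) concrete.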
“…FGSM (Goodfellow et al. 2014a), SPA (Xiang et al. 2018), WBA (Bai et al. 2018), and CDG (Chen et al. 2018b) are white-box attacks, which have access to the details of the training algorithm and the corresponding parameters of the target model. Meanwhile, PIA (Behzadan and Munir 2017), STA (Lin et al. 2017), EA (Lin et al. 2017), and AVI (Liu et al. 2017) are black-box attacks, in which the adversary has no knowledge of the training algorithm or the parameters of the model; in the threat model discussed in these works, the authors assumed that the adversary has access to the training environment but knows neither the random initialization of the target policy nor the learning algorithm.…”
Section: Summary For Adversarial Attack In Reinforcement Learning
confidence: 99%
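Of the white-box attacks listed above, FGSM is the simplest: it perturbs the input by a small step in the direction of the sign of the loss gradient with respect to the input. A minimal sketch follows, using a logistic-regression model as an illustrative stand-in for the target network (the cited attacks operate on deep policies, and `fgsm_perturb` is a hypothetical helper name); for this model the input gradient of the cross-entropy loss has a closed form, so no autodiff framework is needed.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM: x_adv = x + eps * sign(dLoss/dx).

    Model: p = sigmoid(w.x + b); loss: binary cross-entropy against label y.
    For this model dLoss/dx = (p - y) * w, computed analytically below.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # model's predicted probability
    grad = (p - y) * w                        # input gradient of the loss
    return x + eps * np.sign(grad)            # ascend the loss, clipped to eps per dim
```

The key design point, which carries over to attacks on RL policies, is that the perturbation budget is an L-infinity bound `eps`: every input dimension moves by exactly ±eps, which is what makes the change hard to notice while still increasing the model's loss.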