2018 IEEE Third International Conference on Data Science in Cyberspace (DSC)
DOI: 10.1109/dsc.2018.00125
A PCA-Based Model to Predict Adversarial Examples on Q-Learning of Path Finding

Cited by 20 publications (11 citation statements); references 11 publications.
“…Their experiments prove the success of adversaries even in black-box scenarios. Xiang et al. [216] developed a PCA-based model for predicting adversarial examples in the context of Q-learning-based path finding. In a related work, Bai et al. [217] attacked the Deep Q-Network (DQN) [218] for robotic path finding in a white-box setup.…”
Section: B. Reinforcement Learning
confidence: 99%
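For orientation, tabular Q-learning on a path-finding task (the setting the cited attack targets) can be sketched in a few lines. This is a minimal illustrative example on a hypothetical 1-D corridor, not the model attacked in the paper; all parameter values below are assumptions.

```python
import numpy as np

# Minimal tabular Q-learning on a 1-D corridor of 5 cells:
# the agent starts at cell 0 and the goal is cell 4.
# Illustrative sketch only -- not the attacked model from the paper.
n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1     # assumed hyperparameters
rng = np.random.default_rng(0)

for _ in range(500):                  # training episodes
    s = 0
    while s != 4:
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2 = max(0, s - 1) if a == 0 else min(4, s + 1)
        r = 1.0 if s2 == 4 else 0.0   # reward only at the goal
        # standard Q-learning temporal-difference update
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.argmax(axis=1)[:4])           # greedy policy: move right toward the goal
```

An adversary in this setting perturbs the agent's observations or environment so that the learned greedy policy no longer leads to the goal.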
“…Other studies of adversarial attacks on the specific application of DRL for path finding have also been conducted by (Xiang et al. 2018) and (Bai et al. 2018); these attacks result in the RL agent failing to find a path to the goal or planning a path that is more costly.…”
Section: Adversarial Attacks on RL Agent
confidence: 99%
“…FGSM (Goodfellow et al. 2014a), SPA (Xiang et al. 2018), WBA (Bai et al. 2018), and CDG (Chen et al. 2018b) are white-box attacks, which have access to the details of the training algorithm and the corresponding parameters of the target model. Meanwhile, PIA (Behzadan and Munir 2017), STA (Lin et al. 2017), EA (Lin et al. 2017), and AVI (Liu et al. 2017) are black-box attacks, in which the adversary has no knowledge of the training algorithm or the corresponding parameters of the model. For the threat model discussed in these works, the authors assumed that the adversary has access to the training environment but does not know the random initializations of the target policy, and additionally does not know what the learning algorithm is.…”
Section: Summary for Adversarial Attack in Reinforcement Learning
confidence: 99%
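The white-box attacks listed above build on gradient access to the target model. FGSM, the canonical example, perturbs the input by a small step in the sign of the loss gradient. A minimal sketch, using a toy linear loss with an analytic gradient (the model and values are assumptions for illustration):

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """Fast Gradient Sign Method: shift each input coordinate by eps
    in the direction that increases the loss (L-infinity bounded)."""
    return x + eps * np.sign(grad)

# Toy linear model: loss(x) = -w . x, so the gradient w.r.t. x is -w.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.4, -0.1])
grad = -w                          # analytic gradient of the toy loss
x_adv = fgsm_perturb(x, grad, eps=0.1)
print(x_adv)                       # each coordinate shifted by +/- 0.1
```

Black-box attacks such as PIA or STA must instead estimate this direction indirectly, e.g. via queries or a surrogate policy, since the gradient is unavailable.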
“…For instance, in the field of Atari games, Lin et al. (2017) proposed a "strategically-timed attack" whose adversarial example at each time step is computed independently of the adversarial examples at other time steps, instead of attacking a deep RL agent at every time step (see "Black-box attack" section). Moreover, in terms of automatic path planning, Liu et al. (2017), Xiang et al. (2018), Bai et al. (2018), and Chen et al. (2018b) all proposed methods that mount adversarial attacks on reinforcement learning algorithms (VIN (Tamar et al. 2016), Q-learning (Watkins and Dayan 1992), DQN (Mnih et al. 2013), A3C (Mnih et al. 2016)) under automatic path-planning tasks (see "Defense technology against adversarial attack" section).…”
Section: Introduction
confidence: 99%