2021
DOI: 10.1109/access.2021.3068768

Adversarial Attacks Against Reinforcement Learning-Based Portfolio Management Strategy

Abstract: Many researchers have incorporated deep neural networks (DNNs) with reinforcement learning (RL) in automatic trading systems. However, such methods result in complicated algorithmic trading models with several defects, especially when a DNN model is vulnerable to malicious adversarial samples. Research has rarely focused on planning long-term attacks against RL-based trading systems. To neutralize these attacks, researchers must consider generating imperceptible perturbations while simultaneously reduci…
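The abstract's requirement of "imperceptible perturbations" can be illustrated with an FGSM-style sketch (a standard gradient-sign attack, not necessarily the paper's exact method): each input feature is nudged in the direction that increases the model's loss, with the change bounded by a budget eps. The features, weights, and linear "policy score" below are illustrative assumptions.

```python
# Hypothetical FGSM-style perturbation sketch; all values are toy placeholders.

def sign(v):
    """Sign of v as -1, 0, or 1."""
    return (v > 0) - (v < 0)

def fgsm_perturb(x, grad, eps=0.01):
    """Return an adversarial sample x' = x + eps * sign(dL/dx)."""
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy linear "policy score": score = w . x, so the loss gradient w.r.t. x is
# proportional to w for a loss linear in the score.
x = [0.12, -0.30, 0.05]   # observed market features (made up)
w = [0.8, -1.1, 0.4]      # model weights (made up)

x_adv = fgsm_perturb(x, grad=w)
print(x_adv)  # each feature shifted by at most eps = 0.01
```

Bounding the per-feature shift by eps is what keeps the perturbation "imperceptible" while still steering the model's decision.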

Cited by 12 publications (4 citation statements) | References 25 publications
“…This challenge concerns the robustification of Deep RL algorithms against poisoning and evasion attacks that target the algorithmic functionality of the automated agent. While very few publications have focused on such attacks against Deep RL algorithms, it is evident from Deep RL and neural-network research in other domains that future cyber attackers will implement them [14,26,108,130]. If this challenge is not addressed, future networked systems could be vulnerable to algorithmic attacks that could take control of the automated blue agent and, eventually, the entire network.…”
Section: Challenges and Their Importance
confidence: 99%
“…Hence, it is imperative to study RL under an adversarial environment. In the last five years, there has been a surge in the number of papers studying the security issues of RL [104,105,106,107,108,109,110,111,112,113,114,115,116,117]. The vulnerabilities of RL come from the information exchange between the agent and the environment.…”
Section: Reinforcement Learning In Adversarial Environment
confidence: 99%
“…Without consistent or accurate feedback from the environment, the agent can either fail to learn an implementable control policy or be tricked into a 'nefarious' control policy favored by the adversaries. In the last five years, there has been a surge in research on the security threats faced by RL with discrete state spaces [13], [16]-[23]. Few, if any, of these studies have investigated the security threats to RL-enabled control systems, which often come with continuous state and action spaces.…”
Section: Introduction
confidence: 99%
“…Related works: There is a recent trend of studying the security threats faced by RL algorithms [13], [16]-[23], [28]. These studies approach the security threats to RL algorithms through two different vulnerable targets: attacks on the reward or cost signals [13], [18], [19], [21], [23], and attacks on state sensing [15], [20], [22].…”
Section: Introduction
confidence: 99%
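The two attack surfaces that statement distinguishes can be sketched against a generic RL loop: (a) poisoning the reward signal the agent receives, and (b) perturbing the state it observes. Everything below (function names, flip probability, noise bound) is an illustrative assumption, not any cited paper's construction.

```python
import random

def poison_reward(r, flip_prob=0.3, rng=random):
    """Reward-signal attack: flip the reward's sign with probability flip_prob."""
    return -r if rng.random() < flip_prob else r

def perturb_state(s, eps=0.05, rng=random):
    """State-sensing attack: add bounded noise to each observed component."""
    return [si + rng.uniform(-eps, eps) for si in s]

rng = random.Random(1)
print(poison_reward(1.0, rng=rng))        # reward the agent actually sees
print(perturb_state([0.2, -0.1], rng=rng))  # state the agent actually sees
```

The distinction matters because reward poisoning corrupts what the agent learns to value, while state perturbation corrupts what it believes the world looks like; defenses for one generally do not cover the other.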