2020
DOI: 10.1609/aaai.v34i02.5587

Adaptive Quantitative Trading: An Imitative Deep Reinforcement Learning Approach

Abstract: In recent years, considerable efforts have been devoted to developing AI techniques for finance research and applications. For instance, AI techniques (e.g., machine learning) can help traders in quantitative trading (QT) by automating two tasks: market condition recognition and trading strategy execution. However, existing methods in QT face challenges such as representing noisy high-frequency financial data and finding the balance between exploration and exploitation of the trading agent with AI techniques.…
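The abstract is truncated here, but the citing works below associate this paper with the RDPG (recurrent deterministic policy gradient) family, in which a recurrent network summarizes the noisy observation history before an action is chosen. The following is a minimal, non-authoritative sketch of that recurrent-policy idea; the GRU encoder, layer sizes, feature count, and the position-in-[-1, 1] action convention are illustrative assumptions, not details taken from the paper.

```python
# Sketch (not the paper's implementation): a GRU encodes a window of noisy
# high-frequency market features into a hidden state, and an actor head
# maps that state to a target position in [-1, 1].
import torch
import torch.nn as nn

class RecurrentActor(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 1), nn.Tanh())

    def forward(self, obs_seq, h0=None):
        # obs_seq: (batch, time, n_features) window of market observations
        out, h = self.gru(obs_seq, h0)
        # tanh bounds the action (target position):
        # -1 = fully short, 0 = flat, +1 = fully long
        return self.head(out[:, -1]), h

# Usage: one decision step on a batch of 8 windows of 32 ticks x 5 features.
actor = RecurrentActor(n_features=5)
action, hidden = actor(torch.randn(8, 32, 5))
```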

Cited by 80 publications (51 citation statements)
References 12 publications

“…Furthermore, the implementation of the RDPG model for different applications, which require a more rapid and accurate response compared to the forecasting, verifies the suitability of the RDPG. Consequently, the RDPG was applied to control traffic lights in transportation [73], learn adaptive behavior in driving [74], perform adaptive trading for different markets [75], and to reduce errors of robot positions and joint torques [42]. These applications of the RDPG algorithm signify that the RDPG based ISVR model is feasible for both offline and real-time applications in any type of complex environment.…”
Section: Results and Comparative Analysis (mentioning)
confidence: 99%

“…Secondly, open high low prices tend to be highly correlated creating some noise in the inputs. Third, the concept of volatility is crucial to detect regime change and is surprisingly absent from these works as well as from other works like (Yu et al 2019;Wang and Zhou 2019;Liu et al 2020;Ye et al 2020;Li et al 2019;Xiong et al 2019).…”
Section: Observations (mentioning)
confidence: 98%

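To make the volatility point above concrete, here is a minimal sketch of the kind of rolling realized-volatility feature that citation finds missing from these works; the window length and the log-return convention are illustrative assumptions, not taken from any of the cited papers.

```python
# Rolling realized-volatility proxy as a regime-change feature.
import numpy as np
import pandas as pd

def rolling_volatility(close: pd.Series, window: int = 20) -> pd.Series:
    log_ret = np.log(close).diff()        # log returns
    return log_ret.rolling(window).std()  # realized-volatility proxy

# Usage on a short illustrative price series.
prices = pd.Series([100.0, 101.2, 100.7, 102.3, 101.9, 103.4, 102.8])
print(rolling_volatility(prices, window=3))
```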
“…Third, there is no consideration of online learning to adapt to changing environment as well as the incorporation of transaction costs. A second stream of research around deep reinforcement learning has emerged to address these points (Jiang and Liang 2016;Jiang, Xu, and Liang 2017;Liang et al 2018;Yu et al 2019;Wang and Zhou 2019;Liu et al 2020;Ye et al 2020;Li et al 2019;Xiong et al 2019;Benhamou et al 2020a;2020b). The dynamic nature of reinforcement learning makes it an obvious candidate for changing environment (Jiang and Liang 2016;Jiang, Xu, and Liang 2017;Liang et al 2018).…”
Section: Related Work (mentioning)
confidence: 99%

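As an illustration of the transaction-cost point raised above, a per-step reward can charge a proportional cost on position turnover so the agent learns to trade less frequently; the cost model and the rate used here are assumptions, not taken from the cited works.

```python
# Sketch of a transaction-cost-aware per-step reward: profit from the held
# position minus a proportional cost on the change in position.
def step_reward(position: float, prev_position: float,
                price_return: float, cost_rate: float = 1e-3) -> float:
    pnl = position * price_return                      # gain from holding
    cost = cost_rate * abs(position - prev_position)   # cost on turnover
    return pnl - cost

# Example: flipping from short (-0.5) to long (+1.0) on a +0.4% move.
print(step_reward(1.0, -0.5, 0.004))  # 0.004 - 0.0015 = 0.0025
```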