Proceedings of the First ACM International Conference on AI in Finance 2020
DOI: 10.1145/3383455.3422519

Risk-sensitive reinforcement learning

Cited by 9 publications (5 citation statements)
References 11 publications
“…Risk-sensitive RL: In the field of risk-sensitive RL, traditional research has focused on optimizing RL agents for specific risk measures (Howard and Matheson 1972; Sato, Kimura, and Kobayashi 2001; Mihatsch and Neuneier 2002; Tamar, Glassner, and Mannor 2015; Chow et al. 2017; Dabney et al. 2018; Vadori et al. 2020). Early efforts include optimizing an instance of WV@R like worst-case (Mihatsch and Neuneier 2002) or CV@R (Tamar, Glassner, and Mannor 2015; Chow et al. 2017; Dabney et al. 2018).…”
Section: Related Work
confidence: 99%
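For readers unfamiliar with the risk measures named in this statement, the sketch below is a minimal, self-contained illustration (not taken from any of the cited papers) of how the worst-case return and CV@R of a policy could be estimated from sampled episode returns; the quantile level `alpha` and the synthetic return sample are assumptions made purely for the example.

```python
import numpy as np

def cvar(returns, alpha=0.05):
    """Empirical CV@R: the mean of the worst alpha-fraction of returns.

    Lower returns are worse here, so CV@R_alpha averages the returns at or
    below the alpha-quantile (the V@R level).
    """
    returns = np.sort(np.asarray(returns, dtype=float))
    var_level = np.quantile(returns, alpha)   # V@R at level alpha
    return returns[returns <= var_level].mean()

# Synthetic episode returns standing in for rollouts of some policy.
sample = np.random.default_rng(0).normal(loc=1.0, scale=2.0, size=10_000)
print("mean return      :", sample.mean())
print("worst-case return:", sample.min())
print("CV@R (alpha=5%)  :", cvar(sample, alpha=0.05))
```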
“…Using the normalizing flow, we also addressed the crossing quantile problem. Our framework focuses on WV@R; while it does not encompass all risk measures, such as CMV (Vadori et al. 2020) or variance, it covers a wide range of risk measures applicable to real-world problems.…”
Section: Conclusion and Limitation
confidence: 99%
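The "crossing quantile problem" referenced here occurs when separately predicted quantile estimates fail to be monotone in the quantile level. The following is a minimal sketch of one standard remedy, accumulating softplus-transformed increments so the quantiles cannot cross; it is an illustrative, assumption-based example, not the normalizing-flow construction used in the citing work.

```python
import numpy as np

def monotone_quantiles(raw_outputs):
    """Map unconstrained outputs to non-crossing quantile estimates.

    The first output gives the lowest quantile; the remaining outputs pass
    through softplus so every increment is positive, which makes the
    quantile curve non-decreasing by construction.
    """
    raw = np.asarray(raw_outputs, dtype=float)
    increments = np.log1p(np.exp(raw[1:]))    # softplus, strictly positive
    return raw[0] + np.concatenate(([0.0], np.cumsum(increments)))

# Raw outputs (e.g. from a quantile head) could imply crossing quantiles;
# after the transform they are ordered by construction.
raw = np.array([2.0, -1.0, 0.5, -3.0, 1.2])
quantiles = monotone_quantiles(raw)
assert np.all(np.diff(quantiles) >= 0)
print(quantiles)
```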
“…It implements DRL (Deep Reinforcement Learning) algorithms to simulate a wide array of markets and trading constraints, and to decide where to trade, at what price, and in what quantity [91]. DRL addresses dynamic decision-making problems by offering portfolio scalability and market-model independence [92][93][94][95][96]. This gives it a competitive edge over human traders [97][98].…”
Section: Trading and Finance
confidence: 99%
“…Vadori et al. (2020) developed a martingale approach to learn policies that are sensitive to the uncertainty of the rewards and are meaningful under some market scenarios. Another line of work focuses on constrained RL problems with different risk criteria (Achiam et al., 2017; Chow et al., 2017, 2015; Ding et al., 2021; Tamar et al., 2015; Zheng & Ratliff, 2020).…”
Section: Further Developments For Mathematical Finance and Reinforcement Learning
confidence: 99%
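A representative form of the constrained risk-sensitive problems mentioned in this statement (a generic sketch, not the exact objective of any cited paper) maximizes expected discounted reward subject to a CVaR budget on cumulative cost:

```latex
\max_{\pi}\; \mathbb{E}_{\pi}\!\Big[\sum_{t=0}^{T}\gamma^{t} r_{t}\Big]
\quad \text{subject to} \quad
\mathrm{CVaR}_{\alpha}\!\Big(\sum_{t=0}^{T}\gamma^{t} c_{t}\Big) \le \beta
```

Here r_t is the per-step reward, c_t a per-step cost, gamma the discount factor, alpha the tail level, and beta the risk budget; these symbols are notation introduced for this illustration rather than taken from the cited works.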