2021
DOI: 10.48550/arxiv.2105.09264
Preprint

Robo-Advising: Enhancing Investment with Inverse Optimization and Deep Reinforcement Learning

Abstract: Machine Learning (ML) has been embraced as a powerful tool by the financial industry, with notable applications spreading in various domains including investment management. In this work, we propose a full-cycle data-driven investment robo-advising framework, consisting of two ML agents. The first agent, an inverse portfolio optimization agent, infers an investor's risk preference and expected return directly from historical allocation data using online inverse optimization. The second agent, a deep reinforcement…
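To make the first agent's idea concrete, here is a minimal, hypothetical sketch of inverse optimization for risk preference: assuming an observed allocation solves a Markowitz mean-variance problem with known expected returns and covariance, a grid search recovers the implied risk-aversion parameter. This is not the paper's online inverse-optimization algorithm (which also infers expected returns); the function names, grid, and toy data are illustrative assumptions.

```python
import numpy as np

def forward_weights(mu, Sigma, gamma):
    """Closed-form Markowitz weights: max_w mu@w - (gamma/2) w@Sigma@w  s.t. sum(w) == 1."""
    inv = np.linalg.inv(Sigma)
    ones = np.ones(len(mu))
    nu = (ones @ inv @ mu - gamma) / (ones @ inv @ ones)   # multiplier for the budget constraint
    return inv @ (mu - nu * ones) / gamma

def infer_gamma(w_obs, mu, Sigma, grid=None):
    """Grid-search the risk aversion whose optimal portfolio best matches the observed one."""
    grid = np.linspace(0.5, 50.0, 500) if grid is None else grid
    errors = [np.linalg.norm(forward_weights(mu, Sigma, g) - w_obs) for g in grid]
    return grid[int(np.argmin(errors))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mu = np.array([0.08, 0.05, 0.03])                 # assumed expected returns
    A = rng.normal(size=(3, 3))
    Sigma = A @ A.T / 10 + 0.01 * np.eye(3)           # assumed positive-definite covariance
    w_hist = forward_weights(mu, Sigma, gamma=7.0)    # stand-in for a historical allocation
    print("implied risk aversion:", infer_gamma(w_hist, mu, Sigma))
```

In the online setting described in the paper, such an inference step would be repeated as new allocation data arrive rather than run once over a fixed grid.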

Cited by 3 publications (4 citation statements) | References 41 publications (71 reference statements)
“…The authors proved that, with high probability, the proposed exploration-exploitation algorithm performs near optimally with the number of time steps depending polynomially on various model parameters. Wang and Yu (2021) proposed an investment robo-advising framework consisting of two agents. The first agent, an inverse portfolio optimization agent, infers an investor's risk preference and expected return directly from historical allocation data using online inverse optimization.…”
Section: RL Approach (mentioning)
Confidence: 99%
“…As introduced in Section 4.6, Alsabah et al. (2021) considered learning within a set of m pre-specified investment portfolios, and Wang and Yu (2021), among others, developed learning algorithms and procedures to infer risk preferences under the framework of Markowitz mean-variance portfolio optimization. It would be interesting to consider a model-free RL approach where the robo-advisor has the freedom to learn and improve decisions beyond a pre-specified set of strategies or the Markowitz framework.…”
Section: Robo-advising in a Model-free Setting (mentioning)
Confidence: 99%
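As a concrete illustration of the "learning within a set of m pre-specified investment portfolios" setting mentioned in the statement above, the following hypothetical sketch runs an epsilon-greedy bandit that balances exploration and exploitation over a small menu of candidate portfolios using only realized rewards. It is not the cited authors' algorithm; the menu, return simulator, and hyperparameters are assumptions chosen for illustration.

```python
import numpy as np

def epsilon_greedy_allocation(portfolios, sample_returns, n_rounds=2000, epsilon=0.1, seed=0):
    """Track average realized reward per candidate portfolio; mostly reinvest in the best so far."""
    rng = np.random.default_rng(seed)
    m = len(portfolios)
    counts = np.zeros(m)
    avg_reward = np.zeros(m)
    for _ in range(n_rounds):
        if rng.random() < epsilon:
            k = int(rng.integers(m))           # explore a random portfolio
        else:
            k = int(np.argmax(avg_reward))     # exploit the current best estimate
        reward = portfolios[k] @ sample_returns(rng)   # realized portfolio return this round
        counts[k] += 1
        avg_reward[k] += (reward - avg_reward[k]) / counts[k]   # running mean update
    return int(np.argmax(avg_reward)), avg_reward

if __name__ == "__main__":
    menu = [np.array([0.6, 0.3, 0.1]),         # three illustrative candidate mixes
            np.array([0.2, 0.5, 0.3]),
            np.array([1/3, 1/3, 1/3])]
    mu = np.array([0.08, 0.05, 0.03]) / 252                 # assumed daily expected returns
    cov = np.diag([0.20, 0.15, 0.05]) ** 2 / 252            # assumed daily covariance
    draw = lambda rng: rng.multivariate_normal(mu, cov)
    best, estimates = epsilon_greedy_allocation(menu, draw)
    print("best portfolio index:", best, "estimated mean rewards:", estimates)
```

With noisy daily returns the estimates converge slowly, which is exactly the exploration-exploitation tension the cited exploration-exploitation analyses address.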
“…We found very limited literature that focuses on wealth building over time by making marginally better daily buying decisions over a lifetime of investing (Wang & Yu, 2021; Philps et al., 2018). However, these models either rely on a 12-month evaluation window to train agents or focus more on risk-profile assessment and portfolio balancing over time.…”
Section: Problem Statement (mentioning)
Confidence: 99%
“…Robo-advising in a Model-free Setting. As introduced in Section 4.6, [9] considered learning within a set of m pre-specified investment portfolios, and [209] and [237] developed learning algorithms and procedures to infer risk preferences, respectively, under the framework of Markowitz mean-variance portfolio optimization. It would be interesting to consider a model-free RL approach where the robo-advisor has the freedom to learn and improve decisions beyond a pre-specified set of strategies or the Markowitz framework.…”
Section: Further Developments for Mathematical Finance and Reinforcement Learning (mentioning)
Confidence: 99%
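To illustrate what a model-free alternative could look like, beyond a fixed menu of portfolios or the Markowitz closed form, the following hypothetical sketch uses a cross-entropy-method (CEM) policy search that scores candidate weight vectors purely by realized rewards from a return simulator; the learner never sees the simulator's parameters. It is a toy illustration under assumed return dynamics, not a method from the cited papers.

```python
import numpy as np

def softmax(z):
    """Map unconstrained parameters to non-negative weights that sum to one."""
    e = np.exp(z - z.max())
    return e / e.sum()

def cem_portfolio(reward_fn, n_assets, iters=50, pop=64, elite_frac=0.2, seed=0):
    """Derivative-free policy search: sample candidates, keep elites, refit the sampler."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(n_assets), np.ones(n_assets)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        thetas = rng.normal(mean, std, size=(pop, n_assets))      # candidate parameters
        scores = np.array([reward_fn(softmax(t), rng) for t in thetas])
        elite = thetas[np.argsort(scores)[-n_elite:]]             # best-scoring candidates
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3  # refit, avoid collapse
    return softmax(mean)

if __name__ == "__main__":
    mu = np.array([0.08, 0.05, 0.03])                # simulator parameters (hidden from learner)
    cov = np.diag([0.20, 0.15, 0.05]) ** 2

    def reward_fn(w, rng, risk_aversion=3.0, n_samples=256):
        # Score a weight vector by a mean-variance-style realized reward on sampled returns.
        r = rng.multivariate_normal(mu, cov, size=n_samples) @ w
        return r.mean() - 0.5 * risk_aversion * r.var()

    print("learned weights:", cem_portfolio(reward_fn, n_assets=3))
```

The point of the sketch is only that the search space is the full simplex of weights rather than a pre-specified strategy set, which is the flexibility the cited survey suggests exploring.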