2022
DOI: 10.1609/aaai.v36i4.20369
FactorVAE: A Probabilistic Dynamic Factor Model Based on Variational Autoencoder for Predicting Cross-Sectional Stock Returns

Abstract: As an asset pricing model in economics and finance, the factor model has been widely used in quantitative investment. Towards building more effective factor models, recent years have witnessed a paradigm shift from linear models to more flexible, nonlinear, data-driven machine learning models. However, due to the low signal-to-noise ratio of financial data, learning effective factor models is quite challenging. In this paper, we propose a novel factor model, FactorVAE, as a probabilistic model with inherent randomness…
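For context on the setup the abstract describes: a classical linear factor model writes cross-sectional returns as per-stock exposures applied to a small set of latent factors plus idiosyncratic noise, and a VAE-based variant replaces the point estimate of those factors with a learned posterior distribution. The snippet below is a minimal, hypothetical PyTorch sketch of that idea; the module names, dimensions, pooling scheme, and single-layer encoders are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class VAEFactorSketch(nn.Module):
    """Sketch: returns ~ exposures @ factors + noise, with the K latent
    factors drawn from a learned Gaussian posterior (reparameterized)."""

    def __init__(self, n_features: int, n_factors: int = 4):
        super().__init__()
        # Encoder: map stock features to posterior parameters of the factors.
        self.enc_mu = nn.Linear(n_features, n_factors)
        self.enc_logvar = nn.Linear(n_features, n_factors)
        # Exposure (beta) network: per-stock loadings on the latent factors.
        self.beta = nn.Linear(n_features, n_factors)

    def forward(self, x):                         # x: (n_stocks, n_features)
        # Pool the cross-section into one set of market-wide factor parameters.
        mu = self.enc_mu(x).mean(dim=0)           # (n_factors,)
        logvar = self.enc_logvar(x).mean(dim=0)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        r_hat = self.beta(x) @ z                  # predicted cross-sectional returns
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return r_hat, kl

model = VAEFactorSketch(n_features=16)
x = torch.randn(100, 16)                          # toy data: 100 stocks, 16 characteristics
returns = torch.randn(100)
r_hat, kl = model(x)
loss = nn.functional.mse_loss(r_hat, returns) + 1e-3 * kl  # ELBO-style objective
loss.backward()
```

The KL term regularizes the factor posterior toward a standard normal prior, which is what gives the factors their "inherent randomness" rather than treating them as fixed point estimates.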

Cited by 17 publications (4 citation statements)
References: 24 publications
“…By incorporating classification loss into the training objective of the autoencoder, e derives a latent space that emphasizes a robust contextual representation. While e and r together constitute an autoencoder that transforms X_e into the latent X_h, considering that X_e is a 2D return of the Low feature, the autoencoder can be considered as a factor model (Duan et al. 2022).…”
Section: Pre-training Stage
confidence: 99%
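The statement above describes an encoder e and reconstructor r trained jointly with a classification loss added to the autoencoding objective. The following is a minimal, hypothetical PyTorch sketch of such a jointly trained autoencoder; the layer sizes, module names, and loss weighting are assumptions for illustration, not the citing paper's code.

```python
import torch
import torch.nn as nn

class AEWithClassifier(nn.Module):
    """Autoencoder whose latent code is also shaped by a classification loss."""

    def __init__(self, in_dim: int, latent_dim: int, n_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, in_dim)
        self.classifier = nn.Linear(latent_dim, n_classes)

    def forward(self, x):
        h = self.encoder(x)                       # latent representation (the X_h role)
        return self.decoder(h), self.classifier(h)

model = AEWithClassifier(in_dim=32, latent_dim=8, n_classes=3)
x = torch.randn(64, 32)                           # toy batch standing in for X_e
labels = torch.randint(0, 3, (64,))
recon, logits = model(x)
# Joint objective: reconstruction plus classification (the 0.5 weight is arbitrary).
loss = nn.functional.mse_loss(recon, x) + 0.5 * nn.functional.cross_entropy(logits, labels)
loss.backward()
```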
“…Target features were generated by extracting high-dimensional abstract representations in the latent space. Existing disentangled learning methods such as β-VAE [16], factor-VAE [17] and Info-GAN [18] are combined with other existing deep learning methods. Since disentangled representation learning is still in its infancy, there are not yet well-established definitions or metrics.…”
Section: Disentangled Representation Learning
confidence: 99%
“…We propose to use variational inference to draw the distribution of hidden factors of time series and learn the relations between these factors to generate and predict the time series. Duan et al. [8] leveraged a VAE to learn the multi-dynamic-factor model [42] for predicting cross-sectional stock returns, which means they ignored the temporal variation of the factors. Differently, our method takes the temporal relations into account for more comprehensive modeling and adapts the classic theory to be more powerful with fewer constraints by means of modern deep learning techniques.…”
Section: Time Series Forecasting
confidence: 99%
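As a point of reference for the variational-inference formulation mentioned above, the objective such latent-factor models maximize is the standard evidence lower bound (ELBO), written here in generic notation with z denoting the hidden factors and x the observed series; this is textbook material, not an equation taken from the cited works.

```latex
\log p_\theta(x) \;\ge\;
\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
\;-\;
\mathrm{KL}\!\left(q_\phi(z \mid x)\,\|\,p(z)\right)
```

The first term rewards reconstructing (or predicting) the observed series from the inferred factors, while the KL term keeps the approximate posterior over the factors close to the prior.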
“…Prior work has explored the use of factors and their relations to predict time series. This is achieved either by explicitly exploiting additional knowledge [4], [5], [6], [7] or by discovering hidden representations from the historical data of the time series [8]. These approaches may have limited applicability and are only feasible for tasks in certain domains that require sophisticated processing of domain knowledge.…”
Section: Introduction
confidence: 99%