2022
DOI: 10.3390/axioms11080375
Forecasting Crude Oil Prices with Major S&P 500 Stock Prices: Deep Learning, Gaussian Process, and Vine Copula

Abstract: This paper introduces methodologies for forecasting oil prices (Brent and WTI) from multivariate time series of major S&P 500 stock prices using Gaussian process modeling, deep learning, and vine copula regression. We also apply Bayesian variable selection and nonlinear principal component analysis (NLPCA) for data dimension reduction. With a reduced number of important covariates, we also forecast oil prices (Brent and WTI) with multivariate time series of major S&P 500 stock prices using Gaussian proc…
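The Gaussian process forecasting named in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; it is a generic zero-mean GP regression with an RBF kernel on synthetic data, and all function names (`rbf_kernel`, `gp_predict`) and parameter choices are hypothetical.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    # Squared-exponential (RBF) kernel between the rows of A and B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq / length_scale**2)

def gp_predict(X_train, y_train, X_test, length_scale=1.0, noise=1e-2):
    # Posterior mean of a zero-mean GP: K_* (K + sigma^2 I)^{-1} y.
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train, length_scale)
    alpha = np.linalg.solve(K, y_train)
    return K_s @ alpha

# Toy usage: treat a covariate series as input and a price series as target.
X = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel()
pred = gp_predict(X, y, X, length_scale=0.2, noise=1e-3)
```

In the paper's setting, `X_train` would hold selected S&P 500 stock prices and `y_train` the oil price series; here a sine curve stands in for both.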

Cited by 7 publications (6 citation statements). References 25 publications (31 reference statements).
“…Additionally, the application of task-specific learning rate schedules has been recommended to tailor the fine-tuning process to the specific characteristics of medical datasets [10,17,32,33]. The importance of hyperparameter optimization through automated search techniques, such as grid search and Bayesian optimization, has been underscored, highlighting their role in identifying the most effective fine-tuning configurations [34,35,36]. These findings collectively emphasize the nuanced impact of hyperparameters on fine-tuning success, suggesting the need for tailored approaches based on task specificity and dataset characteristics.…”
Section: Impact of Hyperparameters on Fine-tuning
confidence: 99%
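The grid search mentioned in this citation statement can be sketched generically: exhaustively evaluate every hyperparameter combination and keep the best-scoring one. This is a minimal stdlib-only illustration, not code from any cited work; `grid_search` and the example loss are hypothetical names.

```python
from itertools import product

def grid_search(eval_fn, grid):
    # Evaluate every combination in the hyperparameter grid and
    # return the configuration with the lowest loss.
    best_cfg, best_loss = None, float("inf")
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        loss = eval_fn(cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

# Toy usage: a loss minimized at lr=0.1, batch=32.
grid = {"lr": [0.01, 0.1, 1.0], "batch": [16, 32]}
best, loss = grid_search(
    lambda c: abs(c["lr"] - 0.1) + abs(c["batch"] - 32) / 100, grid
)
```

Bayesian optimization replaces this exhaustive loop with a surrogate model that proposes promising configurations, which is why it scales better when each evaluation (a fine-tuning run) is expensive.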
“…It accomplishes this by predicting masked words via masked language modeling and by learning relationships between two sentences via the next sentence prediction task. Fine-tuning is then the process of adapting the pre-trained BERT model to a particular task [54].…”
Section: BERT
confidence: 99%
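The masked language modeling step described above can be sketched as simple data preparation: hide a fraction of input tokens and record the originals as prediction targets. This is a toy stdlib sketch of the masking idea only (real BERT also substitutes random tokens and operates on subword IDs); `mask_tokens` is a hypothetical name.

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    # BERT-style MLM input: replace a random fraction of tokens with
    # [MASK] and keep the originals as the targets to predict.
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            targets[i] = tok
            masked.append(mask_token)
        else:
            masked.append(tok)
    return masked, targets

# Toy usage.
tokens = "the cat sat on the mat".split()
masked, targets = mask_tokens(tokens, mask_prob=0.5, seed=1)
```

During pre-training the model sees `masked` and is trained to recover `targets`; fine-tuning then reuses the resulting weights for a downstream task.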
“…With regard to the extent of prediction errors, vine copula regression with NLPCA was generally superior to the other proposed approaches. To reduce the dimensionality of the data, Bayesian variable selection and nonlinear principal component analysis (NLPCA) were used [17].…”
Section: Literary Survey
confidence: 99%
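The nonlinear dimension reduction credited to NLPCA above can be illustrated with a related technique. NLPCA is commonly implemented with autoencoder networks; the sketch below instead uses kernel PCA, a different but kindred nonlinear reduction, because it fits in a few numpy lines. The function name `kernel_pca` and all parameters are hypothetical, not from the cited paper.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    # Nonlinear dimension reduction: eigendecompose the centered
    # RBF kernel matrix and project onto the top eigenvectors.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    n = len(X)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one   # double-center K
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Toy usage: compress 5 correlated covariates to 2 nonlinear components.
rng = np.random.RandomState(0)
Z = kernel_pca(rng.randn(30, 5), n_components=2, gamma=0.1)
```

In the paper's pipeline, such components would replace the full set of stock-price covariates before the vine copula regression is fit.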