2022
DOI: 10.3390/fi14090252
Forecasting the Risk Factor of Frontier Markets: A Novel Stacking Ensemble of Neural Network Approach

Abstract: Forecasting the risk factor of financial frontier markets has always been a very challenging task. Unlike an emerging market, a frontier market lacks a parameter named “volatility”, which indicates the market’s risk; because of this missing parameter and the lack of proper prediction, it has become very difficult for direct customers to invest money in frontier markets. However, the noise, seasonality, random spikes and trends of the time-series datasets make it even more com…

Cited by 13 publications (4 citation statements)
References 64 publications
“…This algorithm performs well in balancing individual regression vulnerabilities. A stacking regressor is an ensemble learning technique that combines multiple regression models via a meta-regressor [73–76]. SR stacks the output of each individual estimator and computes the final prediction by using each estimator's output as an input to a final estimator.…”
Section: Machine Learning Regressors
confidence: 99%
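The stacking scheme this statement describes can be sketched with scikit-learn's StackingRegressor. This is a hypothetical illustration, not the cited paper's implementation: the base estimators, meta-regressor (Ridge) and synthetic data below are assumptions chosen for brevity.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR

# Synthetic regression data (illustrative only).
X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

# Base estimators whose predictions become the inputs of the meta-regressor.
base = [
    ("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
    ("svr", SVR()),
]

# The final estimator (here Ridge) learns from the base estimators'
# cross-validated predictions, as in the description above.
stack = StackingRegressor(estimators=base, final_estimator=Ridge())
stack.fit(X, y)
preds = stack.predict(X[:3])
```

In practice, the meta-regressor is trained on out-of-fold predictions of the base models, which is what helps balance their individual weaknesses.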
“…The final fully connected layers use the softmax activation function, which gives the probability of input being in a particular class. Finally, we trained the model on 9617 sample images for 30 epochs with a learning rate of 0.0001 and Adam optimizer [34,35].…”
Section: Classification Models and Fine Tuning
confidence: 99%
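The softmax activation mentioned in this statement maps the final layer's logits to class probabilities. A minimal numerically stable sketch (the logit values are illustrative):

```python
import numpy as np

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([2.0, 1.0, 0.1])   # raw scores from the last dense layer
probs = softmax(logits)              # probabilities summing to 1
```

The predicted class is simply the index of the largest probability, which is why softmax pairs naturally with a cross-entropy loss and an optimizer such as Adam.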
“…All of the models are individually applied on English, Bangla, and Hindi datasets. For training the models, the dataset has been split into two parts: training layer [27,28]. Figure 3 depicts the LSTM's structural layout.…”
Section: E Classification Models
confidence: 99%
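The LSTM structure this statement refers to can be sketched as a single-cell forward pass. This is a minimal NumPy illustration of the standard LSTM gate equations, not the cited paper's model; the weight shapes and step count are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    # One LSTM timestep: input (i), forget (f), output (o) gates and
    # candidate (g) are computed from stacked pre-activations.
    n = h.shape[0]
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2 * n]), sigmoid(z[2 * n:3 * n])
    g = np.tanh(z[3 * n:])
    c_new = f * c + i * g           # updated cell state
    h_new = o * np.tanh(c_new)      # updated hidden state
    return h_new, c_new

rng = np.random.default_rng(0)
d, n = 4, 3                          # input and hidden sizes (illustrative)
W = rng.standard_normal((4 * n, d))
U = rng.standard_normal((4 * n, n))
b = np.zeros(4 * n)

h, c = np.zeros(n), np.zeros(n)
for x in rng.standard_normal((5, d)):   # run five timesteps
    h, c = lstm_step(x, h, c, W, U, b)
```

The gating structure is what lets the LSTM carry information across long sequences, which is why it suits the text-classification setting described above.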