2020
DOI: 10.1016/j.eswa.2019.112828

Comparing the effectiveness of deep feedforward neural networks and shallow architectures for predicting stock price indices

Abstract: Many existing learning algorithms suffer from limited architectural depth and the locality of estimators, making it difficult to generalize from the test set and providing inefficient and biased estimators. Deep architectures have been shown to appropriately learn correlation structures in time series data. This paper compares the effectiveness of a deep feedforward Neural Network (DNN) and shallow architectures (e.g., Support Vector Machine (SVM) and one-layer NN) when predicting a broad cross-section of stock…
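
As a rough illustration of the comparison the abstract describes, the sketch below fits a deep MLP against two shallow baselines (an SVM with an RBF kernel and a one-hidden-layer NN) on lagged returns. The synthetic series, feature window, and all hyperparameters are illustrative assumptions, not the paper's actual setup.

```python
# Deep MLP vs. shallow baselines on lagged returns (illustrative only).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 1000)) + 100   # synthetic index level
returns = np.diff(prices) / prices[:-1]

# Features: previous 5 returns; target: next return.
window = 5
X = np.array([returns[i:i + window] for i in range(len(returns) - window)])
y = returns[window:]
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)

models = {
    "deep MLP (3 hidden layers)": MLPRegressor(hidden_layer_sizes=(64, 32, 16),
                                               activation="relu", max_iter=2000,
                                               random_state=0),
    "shallow MLP (1 hidden layer)": MLPRegressor(hidden_layer_sizes=(16,),
                                                 activation="tanh", max_iter=2000,
                                                 random_state=0),
    "SVM (RBF kernel)": SVR(kernel="rbf", C=1.0, epsilon=0.001),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"{name}: test MSE = {mse:.6f}")
```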

Cited by 56 publications (24 citation statements) | References: 54 publications

“…Tanh and ReLU are two commonly used activation functions. Because combining the two functions can overcome their individual deficiencies, both are adopted in this study, with a detailed introduction in Appendix A (Figure S1).…”
Section: Modelling Soil Cyclic Behaviour Using LSTM
confidence: 99%
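
As background for this statement, a minimal sketch of the two activations and the deficiency each one offsets; the numerical demo is illustrative and not taken from the citing paper.

```python
# Minimal sketch of the two activations the statement contrasts.
# The deficiency notes are standard background: tanh saturates
# (vanishing gradients), ReLU can leave units permanently inactive.
import numpy as np

def tanh(x):
    # Zero-centered, but saturates toward -1/1 for large |x|.
    return np.tanh(x)

def relu(x):
    # No saturation for x > 0, but zero gradient for x <= 0.
    return np.maximum(0.0, x)

x = np.linspace(-4.0, 4.0, 9)
print("x:   ", x)
print("tanh:", np.round(tanh(x), 3))
print("relu:", relu(x))
```
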
“…DMLP contains many hyperparameters, so this paper uses a grid search method to discover its optimal hyperparameters. Since the ReLU function performs better than the tanh function [34], this paper applies ReLU as the activation function. All the considered hyperparameters are presented in Table 1.…”
Section: A. Portfolio Optimization Model Based on DMLP
confidence: 99%
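
A sketch of the grid-search procedure this statement describes, with ReLU fixed as the activation. The grid values, placeholder data, and the use of scikit-learn's GridSearchCV are assumptions; the citing paper's actual hyperparameters are listed in its Table 1.

```python
# Grid search over MLP hyperparameters with ReLU fixed (illustrative).
import numpy as np
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))                 # placeholder features
y = X @ rng.normal(size=10) + rng.normal(0, 0.1, 500)

param_grid = {
    "hidden_layer_sizes": [(32,), (64, 32), (64, 32, 16)],
    "learning_rate_init": [1e-3, 1e-4],
    "alpha": [1e-4, 1e-3],                     # L2 penalty
}
search = GridSearchCV(
    MLPRegressor(activation="relu", max_iter=2000, random_state=0),
    param_grid,
    cv=TimeSeriesSplit(n_splits=3),            # keep temporal order
    scoring="neg_mean_squared_error",
)
search.fit(X, y)
print("best hyperparameters:", search.best_params_)
```
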
“…Early stopping is also used to reduce the overfitting problem. Since the ReLU function outperforms the tanh function [34], ReLU is adopted as the activation function. After much trial and error, the number of hidden nodes is set to 5, the number of hidden layers to 1, the learning rate to 0.001, the early-stopping patience to 0, the batch size to 100, the dropout rate to 0.1, the recurrent dropout rate to 0.2, and the optimizer to RMSProp.…”
Section: B. Portfolio Optimization Model Based on LSTM Neural Network
confidence: 99%
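
The listed configuration maps directly onto a Keras LSTM; the sketch below is an assumed reconstruction (Keras itself, the input shape, and the placeholder data are not from the citing paper), wiring in the stated values.

```python
# Assumed Keras reconstruction of the stated LSTM configuration:
# 1 hidden layer with 5 units, ReLU activation, dropout 0.1,
# recurrent dropout 0.2, RMSProp at lr 0.001, batch size 100,
# early stopping with patience 0.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

timesteps, n_features = 20, 8                  # assumed input shape
model = keras.Sequential([
    layers.Input(shape=(timesteps, n_features)),
    layers.LSTM(5, activation="relu", dropout=0.1, recurrent_dropout=0.2),
    layers.Dense(1),                           # e.g., predicted return
])
model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=0.001),
              loss="mse")

# patience=0 stops as soon as validation loss fails to improve.
early_stop = keras.callbacks.EarlyStopping(patience=0,
                                           restore_best_weights=True)

X = np.random.normal(size=(1000, timesteps, n_features)).astype("float32")
y = np.random.normal(size=(1000, 1)).astype("float32")
model.fit(X, y, validation_split=0.2, batch_size=100,
          epochs=50, callbacks=[early_stop], verbose=0)
```
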
“…Feedforward neural networks fundamentally consist of three layers: an input layer, a hidden layer, and an output layer. An FFNN with one hidden layer of monotonically increasing differentiable functions has the ability to approximate continuous functions [12]. The FFNN model used in this paper has two input neurons and a hidden layer with two nodes; the inputs are data collected from the field, which are the determinants of the cost of fiber cable repairs.…”
Section: Feedforward Neural Network
confidence: 99%
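
A minimal sketch of the FFNN this statement describes: two input neurons, one hidden layer with two nodes, and a single output for the predicted repair cost. Keras and the sigmoid hidden activation are assumptions; the text only requires a monotonically increasing differentiable activation.

```python
# Two inputs -> one hidden layer (2 nodes) -> one output (repair cost).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(2,)),               # two field-collected determinants
    layers.Dense(2, activation="sigmoid"),  # one hidden layer, two nodes
    layers.Dense(1),                        # predicted repair cost
])
model.compile(optimizer="adam", loss="mse")

X = np.random.uniform(size=(200, 2)).astype("float32")        # placeholder data
y = (100.0 * X.sum(axis=1, keepdims=True)).astype("float32")  # placeholder cost
model.fit(X, y, epochs=100, verbose=0)
print(model.predict(X[:3], verbose=0))
```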