2012
DOI: 10.1007/978-81-322-0491-6_28
Efficient Prediction of Stock Market Indices Using Adaptive Neural Network

Cited by 4 publications (3 citation statements)
References 10 publications
“…The complexity of financial series has been widely acknowledged, with Bebarta et al. (2012) and S. P. Das and Padhy (2018) both noting the challenge it presents. Cao and Tay (2001) developed a seminal method for applying computational methods with nonlinear characteristics to predict futures contracts' returns, outlining the advantages and disadvantages of the models used.…”
Section: Related Literature
confidence: 99%
“…With the increasing participation of individuals in the financial market, new techniques, perspectives, and scientific research are continually emerging, providing renewed inspiration for both innovators and traditional stakeholders (investors, analysts, banks, managers, regulatory agencies, etc.). It is widely acknowledged that stocks are a high-risk (Vui et al., 2013) and relatively appealing asset class to predict (Bebarta et al., 2012; S. P. Das & Padhy, 2018), given their varying expressions of the return-liquidity-volatility triad.…”
Section: Introduction
confidence: 99%
“…where a_j represents the output of the previous-layer neuron, W_ij is the weight between the ith and jth neurons, and W_i0 is the input bias of this neuron. In this work, the MLP network is trained using the backpropagation method, and a detailed explanation is presented in [26,27].…”
Section: Bayes and Naive Bayes Classifiers
confidence: 99%
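
The neuron computation quoted above reduces to a weighted sum of the previous layer's outputs plus a bias, net_i = sum_j W_ij * a_j + W_i0, passed through a nonlinearity. The sketch below illustrates that forward step in Python; the sigmoid activation, layer sizes, and variable names are illustrative assumptions, not details taken from the cited paper, which states only that the MLP is trained with backpropagation.

import numpy as np

def sigmoid(x):
    # Common MLP activation; assumed here for illustration only.
    return 1.0 / (1.0 + np.exp(-x))

def layer_forward(a_prev, W, b):
    """Forward pass of one MLP layer.

    a_prev : (n_prev,) outputs of the previous layer (the a_j terms)
    W      : (n, n_prev) weight matrix (W_ij, weight from neuron j to neuron i)
    b      : (n,) input biases (the W_i0 terms)
    """
    net = W @ a_prev + b  # net_i = sum_j W_ij * a_j + W_i0
    return sigmoid(net)

# Usage example: a 3-input, 2-neuron layer with random weights.
rng = np.random.default_rng(0)
a_prev = rng.normal(size=3)
W = rng.normal(size=(2, 3))
b = np.zeros(2)
print(layer_forward(a_prev, W, b))

In training, backpropagation would compute the gradient of a loss with respect to W and b through this same forward expression and update them by gradient descent; that step is omitted here.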