2014
DOI: 10.1080/08839514.2014.862771
Principal Component Analysis (PCA) for Estimating Chlorophyll Concentration Using Forward and Generalized Regression Neural Networks

Cited by 23 publications (7 citation statements)
References 16 publications
“…Principal Component Analysis (PCA) is used to select the ANN’s input parameters [35, 36]. PCA is often combined with ANN modelling because the resulting dimension reduction lowers the model’s computational complexity, reducing the risk of misconvergence and poor accuracy [37].…”
Section: Methods (mentioning)
Confidence: 99%
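The PCA-before-ANN workflow quoted above can be illustrated with a minimal sketch using scikit-learn. The data, the 95% variance threshold, and the network size are hypothetical assumptions for illustration, not the cited paper's actual settings.

```python
# Hypothetical illustration of PCA-based input selection for an ANN:
# PCA reduces the input dimension before the network is trained.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                  # 12 candidate input parameters
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.1, size=200)

# Keep the principal components explaining 95% of the variance, shrinking
# the ANN's input dimension and hence its computational complexity.
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
model.fit(X, y)
print("retained components:", model.named_steps["pca"].n_components_)
```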
“…The generalized regression neural network (GRNN), as a modified form of the RBFNN, was proposed by Specht (1991). The GRNN approximates any arbitrary function between the input and output variables, drawing the function estimate directly from the training data (Zounemat-Kermani, 2014). Like the RBFNN, this network does not require an iterative procedure.…”
Section: Generalized Regression Neural Network (mentioning)
Confidence: 99%
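The no-iterative-training property mentioned above follows from the GRNN's form: its prediction is a Gaussian-kernel weighted average of the training targets (a Nadaraya-Watson estimator). Below is a minimal sketch; the smoothing parameter `sigma` and the test data are assumptions, not values from the cited works.

```python
# Minimal GRNN in the sense of Specht (1991): the prediction is a
# kernel-weighted average of the training targets, so "training" is
# just storing the data -- no iterative weight optimization.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Predict at X_query as a Gaussian-kernel average of y_train."""
    # Squared distances between every query point and every training point.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma**2))       # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)     # summation / output layers

X = np.linspace(0, 2 * np.pi, 50)[:, None]
y = np.sin(X).ravel()
print(grnn_predict(X, y, np.array([[1.0], [2.0]])))  # ~ sin(1), sin(2)
```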
“…The remaining 500 samples are defined as the testing set. We preset the learning rate and the maximum number of training cycles by referring to [21, 29, 30]; we then ran the experiment repeatedly on the training sets of the different index data, choosing the optimal number of hidden-layer nodes in each case (see Table 1). The maximum number of training iterations K is 300; each dataset has its own learning rate η, and after many experiments on the training data we chose 0.001, 0.001, 0.05, and 0.01 for SSE, TWSE, KOSPI, and Nikkei225, respectively.…”
Section: Forecasting and Statistical Analysis of Stock Price (mentioning)
Confidence: 99%
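The hyperparameters quoted above (K = 300 iterations, a per-index learning rate η) can be wired into a plain gradient-descent loop. The one-hidden-layer tanh network below is a hypothetical stand-in; the citing paper's exact architecture is not reproduced here.

```python
# Hypothetical training loop using the quoted settings:
# K = 300 maximum iterations and a learning rate chosen per index.
import numpy as np

K = 300
eta = {"SSE": 0.001, "TWSE": 0.001, "KOSPI": 0.05, "Nikkei225": 0.01}

def train_gd(X, y, n_hidden, lr, iters=K, seed=0):
    """Full-batch gradient descent on a one-hidden-layer tanh regressor."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
    w2 = rng.normal(scale=0.1, size=n_hidden)
    for _ in range(iters):
        h = np.tanh(X @ W1)                          # hidden activations
        err = h @ w2 - y                             # prediction error
        w2 -= lr * h.T @ err / len(y)                # output-layer step
        W1 -= lr * X.T @ (np.outer(err, w2) * (1 - h**2)) / len(y)
    return W1, w2

X = np.random.default_rng(1).normal(size=(500, 4))   # 500 training samples
y = X.sum(axis=1)
W1, w2 = train_gd(X, y, n_hidden=8, lr=eta["SSE"])
```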
“…“r” stands for the learning rate. The number of hidden nodes is also chosen by referring to [21, 29, 30]. The experiments were repeated to determine the number of hidden nodes and the training cycles during the training process.…”
Section: Forecasting and Statistical Analysis of Stock Price (mentioning)
Confidence: 99%
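The repeated-experiment selection of hidden nodes described above amounts to a small search over candidate sizes, keeping the one with the lowest validation error. Everything below (candidate sizes, data, split) is a hypothetical sketch, not the citing paper's protocol.

```python
# Hypothetical hidden-node selection: train at several sizes and keep
# the one with the lowest validation error, mirroring the repeated
# experiments described in the quoted passage.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 4))
y = X.sum(axis=1)
X_tr, y_tr, X_va, y_va = X[:500], y[:500], X[500:], y[500:]  # 500-sample training set

def val_error(n_hidden):
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,),
                       learning_rate_init=0.001, max_iter=300,
                       random_state=0).fit(X_tr, y_tr)
    return np.mean((net.predict(X_va) - y_va) ** 2)

best = min([4, 8, 16, 32], key=val_error)
print("selected hidden nodes:", best)
```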