2009 International Conference on Information Technology and Computer Science
DOI: 10.1109/itcs.2009.20

Cited by 18 publications (2 citation statements)
References 5 publications
“…The correlation between the input u(k) and output y(k) in the MLP network can be written mathematically as equations (14) and (15),[53] where x(k) denotes the output vector of the hidden layer, and w2 and w1 represent the connection weight matrices from the hidden layer to the output layer and from the input layer to the hidden layer, respectively. Moreover, b1 and b2 represent the bias terms in the input and output layers, respectively, and f1(·)…”
Section: BP Neural Network
Citation type: mentioning
Confidence: 99%
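Equations (14) and (15) themselves did not survive extraction. From the definitions the quote does give (hidden output x(k), weights w1 and w2, biases b1 and b2, hidden activation f1(·)), a plausible reconstruction, assuming a second activation f2(·) on the output layer, is:

```latex
\begin{align}
x(k) &= f_1\!\left(w_1\,u(k) + b_1\right) \tag{14}\\
y(k) &= f_2\!\left(w_2\,x(k) + b_2\right) \tag{15}
\end{align}
```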
“…To be specific, the MATLAB Neural Network Toolbox provides an elmannet function, and an Elman network can be constructed by setting three parameters of elmannet: the delay time, the number of hidden-layer neurons, and the training function. In this case, the number of hidden-layer neurons is set to 18, and TRAINGDX is chosen as the training function [20][21][22]. TRAINGDX, short for gradient descent with momentum and adaptive learning rate backpropagation, is a network training function that updates weight and bias values according to gradient descent with momentum and an adaptive learning rate.…”
Section: Construction of Elman Neural Network
Citation type: mentioning
Confidence: 99%
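The quoted configuration translates into a short, runnable sketch. This is not the cited authors' code: it re-creates the described setup (Elman recurrence, 18 hidden-layer neurons, gradient descent with momentum plus an adaptive learning rate) in PyTorch rather than MATLAB, and the layer sizes, hyperparameters, and dummy data are illustrative assumptions; ReduceLROnPlateau is only a rough stand-in for TRAINGDX's adaptive-rate rule, which both raises and lowers the learning rate.

```python
# Minimal sketch of the Elman setup the quote describes (not the cited code).
# torch.nn.RNN implements the Elman recurrence
#   x(k) = tanh(w1 u(k) + wr x(k-1) + b1),
# which is why it is used in place of MATLAB's elmannet.
import torch
import torch.nn as nn

class ElmanNet(nn.Module):
    def __init__(self, n_in=1, n_hidden=18, n_out=1):
        super().__init__()
        # 18 hidden neurons, as in the quoted configuration
        self.rnn = nn.RNN(n_in, n_hidden, nonlinearity="tanh", batch_first=True)
        self.out = nn.Linear(n_hidden, n_out)  # linear output layer

    def forward(self, u):
        x, _ = self.rnn(u)   # x: hidden-layer outputs for every time step
        return self.out(x)   # y(k) = w2 x(k) + b2

model = ElmanNet()
# TRAINGDX = gradient descent with momentum + adaptive learning rate.
# Rough analogue: SGD with momentum, plus a scheduler that lowers the
# learning rate when the loss stops improving (assumption, not TRAINGDX's rule).
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.7, patience=5)

u = torch.randn(8, 20, 1)  # dummy data: 8 sequences, 20 steps, 1 input
t = torch.randn(8, 20, 1)  # dummy targets, same shape as the output
for epoch in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(u), t)
    loss.backward()
    opt.step()
    sched.step(loss.item())  # shrink the learning rate on a loss plateau
```

The delay time of elmannet (the third quoted parameter) corresponds here to the one-step feedback built into the recurrence; longer delays would need explicit input buffering.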