Neural Networks for Identification, Prediction and Control 1995
DOI: 10.1007/978-1-4471-3244-8_3

Dynamic System Identification Using Recurrent Neural Networks

Cited by 26 publications (30 citation statements)
References: 5 publications
“…Next, the activation of each hidden unit is copied into a corresponding context unit on a one-for-one basis with fixed weights of 1, and then the next time step is performed. This is equivalent to a recurrent connection from every hidden unit to itself and is more restrictive than the arbitrary recurrent connections allowed by Minsky's claim [31].…”
Section: Elman Network
mentioning confidence: 99%
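The copy mechanism this statement describes is compact enough to show in code. Below is a minimal NumPy sketch of one Elman step; the layer sizes and the tanh hidden activation are illustrative assumptions, not taken from the cited works. The context update is a plain copy with fixed weight 1, exactly the restricted recurrence the quote contrasts with arbitrary recurrent connections:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 5, 2

# Trainable weights: input->hidden, context->hidden, hidden->output.
W_xh = rng.standard_normal((n_hidden, n_in)) * 0.1
W_ch = rng.standard_normal((n_hidden, n_hidden)) * 0.1
W_hy = rng.standard_normal((n_out, n_hidden)) * 0.1

context = np.zeros(n_hidden)  # context units start empty

def elman_step(x, context):
    # Hidden activation depends on the current input and on the
    # previous hidden state held in the context units.
    h = np.tanh(W_xh @ x + W_ch @ context)
    y = W_hy @ h
    # The copy step: each hidden activation is transferred to its
    # corresponding context unit with a fixed weight of 1.
    new_context = h.copy()
    return y, new_context

for t in range(4):
    x_t = rng.standard_normal(n_in)
    y_t, context = elman_step(x_t, context)
```

Because the copy weights are fixed at 1 and one-for-one, only W_xh, W_ch, and W_hy are trained; this is what makes the architecture more restrictive than general recurrence.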
“…Following the literature [17,18] and [20], we choose . Therefore, Equation (9) can be modified as…”
Section: A Description Of HDNN Model
mentioning confidence: 99%
“…HDNN can be easily realized with a Hopfield circuit and has the property that its energy decreases within a finite number of node-updating steps. Analyses of the fundamental properties, stability, convergence, and equilibria of HDNN for discrete and continuous systems were presented in [17,18].…”
Section: Introduction
mentioning confidence: 99%
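The finite-step energy-descent property can be illustrated with a small sketch. The following assumes a standard discrete Hopfield network with a symmetric, zero-diagonal weight matrix and asynchronous sign updates (a textbook construction, not the specific HDNN of [17,18]); under these assumptions each single-node update never increases the quadratic energy, so the state settles after finitely many sweeps:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8

# Symmetric weights with zero diagonal, as the energy-descent
# argument requires.
A = rng.standard_normal((n, n))
W = (A + A.T) / 2
np.fill_diagonal(W, 0.0)

def energy(s):
    # E(s) = -1/2 s^T W s; asynchronous sign updates never increase it.
    return -0.5 * s @ W @ s

s = rng.choice([-1.0, 1.0], size=n)
changed = True
while changed:  # terminates after finitely many node updates
    changed = False
    for i in range(n):
        new_si = 1.0 if W[i] @ s >= 0 else -1.0
        if new_si != s[i]:
            s[i] = new_si
            changed = True
print("final energy:", energy(s))
```

Since the energy is bounded below and strictly decreases on every flip with a nonzero local field, the loop must reach a fixed point, which is the convergence behavior the citation refers to.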
“…All algebraic (feedforward) NNs, FNs, and WNs suffer from some drawbacks. In non-linear system modeling they require a tapped-delay-line approach, so the number of rules grows exponentially and the number of parameters in the rules becomes large (the so-called ‘curse of dimensionality’), which leads to long computation times, sensitivity to external noise, and difficulty in obtaining an independent system simulator [32,45,52,54]. The major drawbacks of these architectures are manifestations of the curse of dimensionality: too many parameters in NNs, large rule bases in FL, a large number of wavelets, long training times, etc.…”
Section: Introduction
mentioning confidence: 99%
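The growth this statement complains about is easy to quantify in a sketch. Assuming a NARX-style regressor built from the last d inputs and outputs (the delay depth d, hidden size, and membership-function count below are illustrative assumptions, not values from the cited works), the first-layer parameter count of a feedforward net grows linearly in d, while a grid-partitioned fuzzy rule base grows exponentially:

```python
import numpy as np

def tdl_regressor(u, y, d, t):
    # NARX-style tapped-delay-line regressor at time t:
    # [y(t-1), ..., y(t-d), u(t-1), ..., u(t-d)]
    return np.concatenate([y[t - d:t][::-1], u[t - d:t][::-1]])

def first_layer_params(d, n_hidden=10):
    # Weights plus biases of the first layer of a one-hidden-layer
    # feedforward net fed the 2*d-dimensional regressor:
    # linear growth in the delay depth d.
    return (2 * d) * n_hidden + n_hidden

def fuzzy_rule_count(d, m=3):
    # Grid-partitioned fuzzy model with m membership functions per
    # regressor component: m**(2*d) rules, exponential in d.
    return m ** (2 * d)

for d in (1, 2, 5, 10):
    print(f"d={d:2d}  NN params={first_layer_params(d):4d}  "
          f"fuzzy rules={fuzzy_rule_count(d)}")
```

A recurrent network sidesteps this blow-up because its internal state summarizes the history, removing the need to widen the input with explicit delayed samples; this is the motivation for the recurrent identification approach of the chapter.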