2000
DOI: 10.1007/978-1-4471-0785-9
Adaptive Control with Recurrent High-order Neural Networks

Cited by 275 publications (210 citation statements)
References 0 publications
“…RHONNs are the result of including high-order interactions, represented by triplets (y_i y_j y_k), quadruplets (y_i y_j y_k y_l), and so on, in the first-order Hopfield model. [10,30] The RHONN model used in this work is the series-parallel model, [10] which is defined as…”

Section: RHONN
confidence: 99%
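The high-order interactions this excerpt describes (triplets, quadruplets, and so on added to a first-order model) and the series-parallel idea (the predictor is driven by the measured state, not its own output) can be sketched as follows. This is an illustrative sketch only; the function names and the fixed weight matrix are hypothetical, not the book's notation.

```python
import itertools
import numpy as np

def high_order_terms(y, order):
    """All products y_i * y_j * ... up to the given interaction order.

    order=1 gives the first-order (Hopfield-like) terms; order=3 adds
    pairs and triplets, order=4 adds quadruplets, and so on.
    """
    terms = []
    for k in range(1, order + 1):
        for idx in itertools.combinations_with_replacement(range(len(y)), k):
            terms.append(np.prod([y[i] for i in idx]))
    return np.array(terms)

def series_parallel_step(W, y_measured, order=3):
    """One prediction step of a series-parallel RHONN-style model.

    The next-state estimate is computed from the *measured* state
    y(k), which is what distinguishes series-parallel from parallel.
    """
    z = high_order_terms(y_measured, order)
    return W @ z  # estimate of y(k+1), one entry per state component

y = np.array([0.5, -0.2])
n_terms = len(high_order_terms(y, 3))   # 9 regressor terms for 2 states
W = np.zeros((2, n_terms))              # weights, normally adapted online
y_next = series_parallel_step(W, y)
```

With two state components and order 3, the regressor contains 2 first-order, 3 second-order, and 4 third-order terms.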
“…, p is a positive integer which denotes the number of external inputs, L denotes the neural network (NN) node number, and ε is an artificial quantity required only for analytical purposes [14,16]. In general, it is assumed that there exists an unknown but constant weight vector w*, whose estimate is ŵ ∈ ℝ^L.…”

Section: Discrete-time HONNs
confidence: 99%
“…For the case where the modelling error is not zero [Rovithakis, 2000], the solutions of the differential equations (13) may become unbounded even if the modelling error is bounded. Therefore, the learning law (13) has to be modified in order to avoid the parameter-drift problem.…”

Section: Robust Updating Weight Law
confidence: 99%
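One standard way to modify a gradient-type learning law against parameter drift is σ-modification: a small leakage term pulls the weights toward zero, which keeps them bounded under a bounded modelling error. The sketch below illustrates that idea only; the specific robust modification used in the cited work may differ, and all names and gains here are hypothetical.

```python
import numpy as np

def robust_update(w, z, error, gamma=0.1, sigma=0.01):
    """Gradient-type weight update with sigma-modification.

    w     : current weight estimate
    z     : regressor vector
    error : identification error (here a bounded modelling error)

    The -gamma*sigma*w leakage term bounds the weights even when the
    pure-gradient law would drift under a persistent nonzero error.
    """
    return w + gamma * error * z - gamma * sigma * w

w = np.zeros(3)
z = np.array([1.0, 0.5, -0.5])
for _ in range(1000):
    # A constant bounded "modelling error": the unmodified gradient law
    # (sigma=0) would integrate this forever and drift without bound.
    w = robust_update(w, z, error=0.1)
```

With the leakage active, the update has a bounded fixed point (here w* = 10·z), so the weights stay finite no matter how long the bounded error persists.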