1999
DOI: 10.1109/78.747775
Stochastic analysis of gradient adaptive identification of nonlinear systems with memory for Gaussian data and noisy input and output measurements

Abstract: This paper analyzes the statistical behavior of a sequential gradient search adaptive algorithm for identifying an unknown nonlinear system composed of a discrete-time linear system H followed by a zero-memory nonlinearity g(·). The LMS algorithm first estimates H. The weights are then frozen. Recursions are derived for the mean and fluctuation behavior of LMS which agree with Monte Carlo simulations. When the nonlinearity is modelled by a scaled error function, the second part of the gradient scheme is shown…
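The two-stage scheme the abstract describes (LMS identification of the linear part on Gaussian input, the weights then frozen, followed by a gradient fit of the scaled-error-function nonlinearity) can be sketched as below. This is a minimal illustration, not the paper's analysis: the filter taps, the erf scale parameter, and the step sizes are all invented for the example, and the LMS stage recovers H only up to the Bussgang scaling factor, which the fitted nonlinearity parameter then absorbs.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)

# Hypothetical Wiener system for illustration: FIR filter H followed by
# a zero-memory nonlinearity g(x) = erf(a*x) (a scaled error function).
H = np.array([0.5, 1.0, -0.3])
a_true = 0.8
g = np.vectorize(lambda x: erf(a_true * x))

N = 20000
x = rng.standard_normal(N)                 # Gaussian input
d = g(np.convolve(x, H, mode="full")[:N])  # observed nonlinear output

# Stage 1: LMS estimate of the linear part. For Gaussian input, Bussgang's
# theorem implies convergence toward a scaled copy of H.
M = len(H)
mu = 0.01
w = np.zeros(M)
for n in range(M - 1, N):
    u = x[n - M + 1:n + 1][::-1]   # regressor, most recent sample first
    w += mu * (d[n] - w @ u) * u

# Stage 2: freeze w and fit the erf scale by scalar stochastic gradient descent.
y = np.convolve(x, w, mode="full")[:N]
a_hat, mu_a = 1.0, 0.05
for n in range(N):
    e = d[n] - erf(a_hat * y[n])
    # d/da erf(a*y) = (2/sqrt(pi)) * y * exp(-(a*y)^2)
    a_hat += mu_a * e * (2 / np.sqrt(np.pi)) * y[n] * np.exp(-(a_hat * y[n]) ** 2)

cosine = w @ H / (np.linalg.norm(w) * np.linalg.norm(H))
print(f"tap alignment with H: {cosine:.3f}, fitted scale: {a_hat:.2f}")
```

Because stage 1 yields a Bussgang-scaled filter, the converged taps align with the direction of H rather than matching it exactly; the scale ambiguity is taken up by `a_hat` in stage 2.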

Cited by 56 publications (52 citation statements)
References 17 publications
“…The calculations are similar to those in the Appendix of [9]; the main difference is that here we deal with a multi-dimensional input. Therefore, we will follow the same methodology as in [9].…”
Section: Appendix IImentioning
confidence: 99%
“…Several authors have analyzed NN algorithms during the last two decades, which has considerably helped the neural network community to better understand the mechanisms of neural networks [1, 7–15]. For example, the authors in [13] have studied a simple structure consisting of two inputs and a single neuron.…”
Section: Introductionmentioning
confidence: 99%
“…We have (24). Taking the expectation, defining […], and neglecting the statistical dependence of […] and […], we obtain (25). This implies that […], where […] denotes the vector whose […]th entry is defined by […]. Using these results with (22) yields the mean weight-error vector update equation (26). This equation requires the second-order moments defined by the matrix […] in order to update the first-order one provided by […].…”
Section: A Mean Weight Behavior Modelmentioning
confidence: 99%
“…(7)–(9)], using results from [25]. Following the same procedure as in [24] yields (63). Now, taking the expected value with respect to […] gives (64).…”
Section: A4mentioning
confidence: 99%