The primary difficulty in identifying a Hammerstein nonlinear system (a static, memoryless nonlinearity in series with a dynamic linear system) is that the output of the nonlinear block (the input to the linear block) is unknown. By employing the theory of affine projection, we propose a gradient-based adaptive Hammerstein algorithm with a variable step size that estimates the Hammerstein nonlinear system parameters. The proposed adaptive parameter estimation algorithm is accomplished without linearizing the system's nonlinearity. To reduce the effects of the eigenvalue spread caused by the Hammerstein system's nonlinearity, a new criterion that provides a measure of how close the Hammerstein filter is to optimum performance is used to update the step size. Experimental results are presented to validate the proposed variable step-size adaptive Hammerstein algorithm on a real-life system and a hypothetical case.
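To make the cascade structure concrete, the following sketch (our own illustration, not the paper's algorithm; the polynomial form of the nonlinearity and all names are assumptions) simulates a Hammerstein system and makes visible why identification is hard: the intermediate signal between the two blocks is never observed.

```python
import numpy as np

def hammerstein_output(u, c, h):
    """Output of a Hammerstein system: a static polynomial nonlinearity
    x[n] = c[0]*u[n] + c[1]*u[n]**2 + ... followed by an FIR filter h.

    u : input signal
    c : polynomial coefficients of the memoryless nonlinearity
    h : impulse response of the dynamic linear block

    The intermediate signal x (output of the nonlinear block, input to
    the linear block) is exactly what an identification algorithm
    cannot observe directly.
    """
    # static memoryless nonlinearity, applied sample by sample
    x = sum(ck * u ** (k + 1) for k, ck in enumerate(c))
    # dynamic linear block: FIR convolution, truncated to the input length
    return np.convolve(x, h)[: len(u)]
```

Only `u` and the final output are available to the estimator; `c` and `h` must be found jointly, which is why the abstract stresses estimating both without linearizing the nonlinearity.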
It is well known that the convergence speed of the Least Mean Square (LMS) algorithm degrades considerably when the input signal is correlated. On the other hand, the Affine Projection Algorithm (APA) was recently developed and has faster convergence for correlated inputs compared to LMS. Convergence analysis done on APA to date has been based on either a modification of the independence assumption, a special regression model, or a Gaussian regression data model. In this paper, an analysis of the standard APA algorithm under the assumption of finite strong memory and finite moments for the regression data is done. We prove that under steady-state conditions the weight error covariance is lower bounded and depends on the step size, not on the correlation of the input regression matrix.

I. INTRODUCTION

The Least Mean Square (LMS) adaptive algorithm [1] is a very well known and extensively studied algorithm. A lot has been written on its convergence and performance properties through the decades [2][3][4][5][6]. A continuing problem with LMS, however, is slow convergence for colored input signals. To solve this problem, the Affine Projection Algorithm (APA) was developed [7]. This in turn led to the development of many variants of the Affine Projection Algorithm that differ mainly in the method of weight update. Convergence analyses of APA to date have assumed that at steady state the input regression matrix is statistically independent of the a priori error vector, which in turn is a modified independence assumption on the input regression matrix. In this work, we give a strict mathematical proof of the convergence of APA based on a realistic assumption of finite strong memory and finite moments for the observations, which has not been done for this algorithm. We show that under steady-state conditions the weight error correlation is bounded and depends on the step size, not on the correlation of the input regression matrix. Our analysis is similar to previous concepts of M-independence, arguing that the mean square deviation of the LMS algorithm is actually of the same order as the step size of the algorithm [3,11,12], and it is comparable to and inspired by the second-order convergence analysis of the LMS adaptive algorithm in [3,11].

This paper is organized as follows. Section II presents the weight update equation of APA; Section III analyses the convergence behavior of APA; Section IV concludes with a summary of the theoretical results obtained.

II. AFFINE PROJECTION ALGORITHM
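The standard APA weight update discussed above can be sketched as follows (our own NumPy illustration; the step size `mu`, regularizer `delta`, and the regularized Gram-matrix inverse are conventional choices and not necessarily the paper's exact formulation):

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-6):
    """One step of the Affine Projection Algorithm.

    w     : (N,) current filter weights
    X     : (K, N) matrix whose rows are the K most recent input regressors
    d     : (K,) corresponding desired outputs
    mu    : step size; the analysis above bounds the steady-state
            weight error in terms of mu rather than input correlation
    delta : small regularizer keeping the K x K inverse well conditioned
    """
    e = d - X @ w                         # a priori error vector
    G = X @ X.T + delta * np.eye(len(d))  # K x K Gram matrix
    # w <- w + mu * X^T (X X^T + delta I)^{-1} e
    return w + mu * X.T @ np.linalg.solve(G, e)
```

With K = 1 this reduces to normalized LMS; projecting onto K recent regressors at once is what gives APA its faster convergence for colored inputs.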