© 1979 IEEE

It is well known that the time constants governing the learning curves of LMS adaptive filters are determined by the eigenvalues of the input correlation matrix. In this paper it is shown that for sufficiently long filters, a transformation to the frequency domain diagonalizes the matrix. As a consequence it is possible to obtain a simple representation of the spectrum of the LMS filter as a function of time (i.e., during convergence). The theoretical results are compared with a computer simulation.

I. BACKGROUND

Least mean square (LMS) adaptive filters are typically updated through the algorithm

    w(k+1) = w(k) + 2\mu \epsilon(k) x(k)    (1)

    \epsilon(k) = d(k) - w^T(k) x(k)    (2)

where w denotes the L-dimensional vector of filter weights, x the reference input vector, \mu the feedback constant, and d the primary input or desired response. In the case of the adaptive line enhancer (ALE), the components of x are delayed samples of d. A detailed exposition of these algorithms may be found in [1], [2].

Let \bar{w} = E(w) be the expected value of the weight vector, p = E(d x), and R \equiv E(x x^T) be the correlation matrix of the input. Then, for stationary inputs, the recursion equation for the mean of w is given approximately (R and w are assumed independent, cf. [1], [2]) by

    \bar{w}(k+1) = (I - 2\mu R) \bar{w}(k) + 2\mu p    (3)

It is easily seen by direct substitution that

    w^* = R^{-1} p    (4)

is the well-known solution to the Wiener-Hopf equation ([1], [2]).

The first term in (3), a transient, represents the decay of the old state; the second term, the growth of the new state w^*. Inasmuch as unfavorable initial conditions may arbitrarily prolong convergence, we focus on the growth state and assume \bar{w}(0) = 0. It then follows from (3) and (4) that

    w^* - \bar{w}(k) = 2\mu \sum_{j=k}^{\infty} (I - 2\mu R)^j p = (I - 2\mu R)^k w^*    (5)

Let \lambda_v be the eigenvalues and e_v the corresponding eigenvectors of R. Taking the vector norm of (5), we have

    \| w^* - \bar{w}(k) \|^2 = \sum_v (1 - 2\mu \lambda_v)^{2k} | e_v \cdot w^* |^2    (6)

where "\cdot" denotes the scalar (dot) product. A similar expression may be derived for the convergence of the mean square error \xi(k) \equiv E(\epsilon^2(k)) to its limiting value \xi^*. It is easily shown ([2], [4]) that in the absence of gradient noise (which is of order L\mu\xi^*)

    \xi(k) - \xi^* = \sum_v (1 - 2\mu \lambda_v)^{2k} \lambda_v | e_v \cdot w^* |^2    (7)

The curves (6) and (7), as functions of k, may be termed the learning curves of the filter weights and of the mean square error, respectively. Each of the L terms relaxes geometrically with a logarithmic slope of \log (1 - 2\mu \lambda_v)^2 \approx -4\mu \lambda_v, thus exhibiting a time constant of

    \tau_v = 1 / (4\mu \lambda_v)    (8)

We remark that, generally, the maximum time constant, 1 / (4\mu \lambda_{min}), is a very conservative estimate for the behavior of the learning curves.
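The relations above can be checked numerically. The following is a minimal NumPy sketch, with a synthetic correlation matrix R, cross-correlation vector p, filter length L, step size mu, and horizon k that are arbitrary example values (not from the paper): it iterates the mean-weight recursion (3), verifies the closed-form transient (5) and the modal expansion (6), and computes the time constants (8).

```python
import numpy as np

# All numerical values below are illustrative assumptions, not data from the paper.
rng = np.random.default_rng(0)
L, k = 4, 200
A = rng.standard_normal((L, L))
R = A @ A.T / L + np.eye(L)        # a symmetric positive definite "input correlation" R
p = rng.standard_normal(L)         # cross-correlation vector p = E(d x)
w_star = np.linalg.solve(R, p)     # Wiener solution (4): w* = R^{-1} p

evals, evecs = np.linalg.eigh(R)   # lambda_v and e_v
mu = 0.05 / evals.max()            # small step size, so 0 < 1 - 2*mu*lambda_v < 1

# Iterate the mean-weight recursion (3) from wbar(0) = 0.
B = np.eye(L) - 2 * mu * R
wbar = np.zeros(L)
for _ in range(k):
    wbar = B @ wbar + 2 * mu * p

# Closed-form transient (5): w* - wbar(k) = (I - 2*mu*R)^k w*.
assert np.allclose(w_star - wbar, np.linalg.matrix_power(B, k) @ w_star)

# Learning curve (6): squared norm of the error as a sum over eigenmodes of R.
proj = evecs.T @ w_star                                   # components e_v . w*
lhs = np.sum((w_star - wbar) ** 2)
rhs = np.sum((1 - 2 * mu * evals) ** (2 * k) * proj ** 2)
assert np.isclose(lhs, rhs)

# Time constants (8): tau_v = 1/(4*mu*lambda_v); the slowest mode is lambda_min.
tau = 1.0 / (4 * mu * evals)
print("slowest time constant:", tau.max())
```

As the last line illustrates, the slowest relaxation is governed by the smallest eigenvalue of R, which is why 1/(4 mu lambda_min) bounds, but usually overestimates, the observed convergence time.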