1989
DOI: 10.1109/31.31337
A new convergence factor for adaptive filters

Citing publications: 1994–2021
Cited by 64 publications (24 citation statements). References 1 publication.
“…In the first case the step-size matrix is again a diagonal matrix, with an upper bound on each of the diagonal elements, as in Eq. (16). The individual step sizes are calculated as follows, where ρ is a small positive value that controls the convergence process.…”
Section: VS-LMS Algorithm by Mathews et al.; mentioning
confidence: 99%
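The excerpt above describes a VS-LMS scheme in which each tap gets its own gradient-adapted step size, clipped at an upper bound. The exact update equation is elided in the excerpt, so the rule below (and the parameter names `rho`, `mu_max`) is an illustrative assumption in the spirit of Mathews' algorithm, not a reproduction of it:

```python
import numpy as np

def vs_lms_mathews(x, d, L, rho=1e-4, mu_init=0.01, mu_max=0.05):
    """Sketch of an LMS filter with individual, gradient-adapted step sizes
    (diagonal step-size matrix with an upper bound on each element).
    The update rule for mu is an assumption for illustration."""
    w = np.zeros(L)              # filter taps
    mu = np.full(L, mu_init)     # one step size per tap (diagonal matrix)
    u_prev = np.zeros(L)         # previous input vector
    e_prev = 0.0                 # previous estimation error
    e = np.zeros(len(x))
    for n in range(L, len(x)):
        u = x[n - L + 1:n + 1][::-1]        # most-recent-first input vector
        e[n] = d[n] - w @ u                 # a-priori estimation error
        # Gradient adaptation of each step size, clipped at the upper bound;
        # rho is a small positive value controlling the adaptation speed.
        mu = np.clip(mu + rho * e[n] * e_prev * u * u_prev, 0.0, mu_max)
        w = w + mu * e[n] * u               # tap update, per-tap step sizes
        u_prev, e_prev = u, e[n]
    return w, e
```

On a noiseless system-identification toy problem the taps converge to the unknown impulse response while each step size drifts within its bound.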
“…[24] Next follows the group containing only one parameter to adjust; this group contains only the NLMS algorithm [5]. The group of algorithms needing an upper bound for the step size in addition to one or two parameters is very large; it includes Shan's algorithms [15] (one of them known as the correlation-LMS), Karni's algorithm [16], Benveniste's algorithms [17], and Mathews' algorithms [19]. Three parameters, but without an upper bound for the step size, are also required by Benesty's algorithm.…”
Section: Introduction; mentioning
confidence: 99%
“…In these procedures the key point is that the filter tap coefficients are not fixed; rather, they change from one iteration to the next, in accordance with the residual noise in the output signal. Numerous filtering algorithms exist to update the coefficients [7]–[9]. Among all these algorithms, the Least Mean Square (LMS) technique is the fundamental one.…”
Section: Introduction; mentioning
confidence: 99%
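Since several excerpts take the plain LMS algorithm as the baseline, a minimal sketch of the standard fixed-step-size update may help fix notation (the function and parameter names are illustrative, not from any cited paper):

```python
import numpy as np

def lms(x, d, L, mu=0.01):
    """Minimal sketch of the standard LMS adaptive filter:
    steepest-descent tap update with a fixed scalar step size mu."""
    w = np.zeros(L)                     # filter taps
    e = np.zeros(len(x))                # estimation-error history
    for n in range(L, len(x)):
        u = x[n - L + 1:n + 1][::-1]    # most-recent-first input vector
        e[n] = d[n] - w @ u             # a-priori estimation error
        w += mu * e[n] * u              # fixed-step-size tap update
    return w, e
```

The variable-step-size schemes discussed in the surrounding excerpts replace the constant `mu` with a time-varying (and, in VS-LMS, per-tap) quantity.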
“…The selection of the step sizes is based on different criteria, such as the magnitude of the estimation error (Kwong, 1986), the polarity of successive samples of the estimation error (Harris, Chabries and Bishop, 1986), and the cross-correlation of the estimation error with the input data (Karni & Zeng, 1989; Shan & Kailath, 1988). Mikhael et al (1986) propose methods that give the fastest speed of convergence by attempting to minimize the squared estimation error, but at the expense of large misadjustment in steady state.…”
mentioning
confidence: 99%
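The last criterion mentioned above drives the step size with the cross-correlation between the estimation error and the input data, so the step size is large far from convergence and shrinks near it. The sketch below illustrates that idea only; the smoothing and scaling rules are assumptions for illustration, not the exact formulas of Shan & Kailath (1988) or Karni & Zeng (1989):

```python
import numpy as np

def correlation_vs_lms(x, d, L, mu_max=0.05, alpha=0.95):
    """Illustrative variable-step-size LMS: the step size follows a
    smoothed estimate of the error-input cross-correlation, bounded
    above by mu_max.  Smoothing/scaling rules are assumptions."""
    w = np.zeros(L)                      # filter taps
    p = np.zeros(L)                      # smoothed cross-correlation estimate
    e = np.zeros(len(x))
    for n in range(L, len(x)):
        u = x[n - L + 1:n + 1][::-1]     # most-recent-first input vector
        e[n] = d[n] - w @ u              # a-priori estimation error
        p = alpha * p + (1 - alpha) * e[n] * u   # exponential smoothing
        c = np.linalg.norm(p)
        mu = mu_max * c / (c + 1e-3)     # large when correlated, bounded by mu_max
        w += mu * e[n] * u
    return w, e
```

Because the smoothed cross-correlation decays as the filter converges, the effective step size shrinks automatically, trading fast initial convergence for low steady-state misadjustment.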