1992
DOI: 10.1109/78.143437

A fast quasi-Newton adaptive filtering algorithm

Cited by 59 publications (7 citation statements)
References 24 publications

“…In this case, the number of iterations required to converge to within ε affects the complexity of the Kalman filter based on quasi-Newton methods. In [4]-[6], we can see the efficiency of quasi-Newton methods for the Kalman filter algorithm.…”
Section: Simulation Environment and Expected Results (mentioning)
confidence: 99%
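
As a hedged, generic illustration of how the tolerance ε governs the iteration count (and hence the cost) of a quasi-Newton solve, the sketch below runs BFGS on a made-up least-squares problem; the objective, the data, and the use of scipy.optimize.minimize are illustrative assumptions, not the cited Kalman filter implementation.

```python
# Hypothetical sketch: the per-update cost of a quasi-Newton step is governed
# by how many iterations are needed to reach the tolerance eps.
# The objective and data below are made up for illustration only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

def cost(x):
    r = A @ x - b
    return 0.5 * r @ r           # least-squares cost

def grad(x):
    return A.T @ (A @ x - b)      # gradient of the cost

eps = 1e-6                        # convergence tolerance
res = minimize(cost, np.zeros(5), jac=grad, method="BFGS",
               options={"gtol": eps})
print(res.nit, "quasi-Newton iterations to reach gtol =", eps)
```

Tightening eps increases res.nit, which is the sense in which the tolerance drives the overall complexity of a quasi-Newton-based update.
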
“…To derive this relation, consider the a posteriori error vector (10) and substitute , which is obtained from (6). After some algebra, the desired relation is obtained as (11). Therefore, the a posteriori errors are identically zero if , which underscores the optimality of the least squares solution. This optimality may not be desirable in some applications where large observation noise is present, as, for example, in acoustic echo cancellation in a hands-free telephone at low signal-to-noise ratio (SNR) levels [23].…”
Section: B. Properties of the URLS Algorithm (mentioning)
confidence: 99%
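
Since equations (6), (10), and (11) of the citing paper are not reproduced in this excerpt, the following is only a minimal sketch using the standard adaptive-filtering definitions of the a priori and a posteriori errors; the notation is assumed, not taken from that paper.

```latex
% Minimal sketch with standard (assumed) notation: w(n) is the weight vector,
% x(n) the input regressor, d(n) the desired response.
\begin{align*}
  e(n)           &= d(n) - \mathbf{w}^{T}(n-1)\,\mathbf{x}(n)
                 && \text{a priori error}\\
  \varepsilon(n) &= d(n) - \mathbf{w}^{T}(n)\,\mathbf{x}(n)
                 && \text{a posteriori error}
\end{align*}
% The quoted statement says that, under the condition elided above, the exact
% least-squares update drives \varepsilon(n) to zero identically; at low SNR
% this means the filter also cancels the observation noise in d(n), which is
% why the property can be undesirable in acoustic echo cancellation.
```
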
“…The fast conjugate gradient algorithm [10] is a suboptimal approximation to it. Another suboptimal approximation of the RLS algorithm, the fast quasi-Newton (FQN) algorithm [11], updates the covariance matrix of the input only periodically, by assuming that the covariance matrix changes slowly with time.…”
(mentioning)
confidence: 99%
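
The periodic covariance update described in this statement can be sketched as follows; the function name, step size mu, forgetting factor lam, and update period N_update are all illustrative assumptions rather than the published FQN algorithm of [11].

```python
# Hypothetical sketch of the idea quoted above: a Newton-type LMS update that
# refreshes the input covariance estimate (and its inverse) only every
# N_update samples, on the assumption that the covariance changes slowly.
import numpy as np

def quasi_newton_filter(x, d, taps=8, mu=0.5, lam=0.99, N_update=64):
    w = np.zeros(taps)
    R = np.eye(taps)                 # running covariance estimate
    R_inv = np.eye(taps)             # its inverse, refreshed periodically
    y_hat = np.zeros(len(x))
    for n in range(taps, len(x)):
        u = x[n - taps:n][::-1]      # current input (regressor) vector
        R = lam * R + (1 - lam) * np.outer(u, u)   # cheap covariance tracking
        if n % N_update == 0:
            R_inv = np.linalg.inv(R) # expensive step, amortized over N_update
        e = d[n] - w @ u             # a priori error
        w = w + mu * (R_inv @ u) * e # Newton-type (whitened) LMS step
        y_hat[n] = w @ u
    return w, y_hat
```

The design point being illustrated is only the amortization: the costly matrix work happens once per N_update samples, while every sample still gets a whitened gradient step.
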
“…Typically, we choose q much smaller than N. As in [7], the filter vector is computed only every N time steps. Thus the complexity of Algorithm 3 averages out to O(q log N) operations per step n, where q is a small integer.…”
Section: Comments (mentioning)
confidence: 99%
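
As a hedged reading of the complexity claim, assuming the periodic recomputation of the filter vector costs on the order of qN log N operations per block, a one-line amortization recovers the stated per-step figure:

```latex
% Assumed block cost: one recomputation every N steps at O(qN\log N) operations.
\frac{O(qN\log N)\ \text{per block}}{N\ \text{steps per block}} = O(q\log N)\ \text{per step}
```
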
“…Comparisons of this proposed FFT-based RLS approach with the standard, O(N²), RLS updating and downdating methods, and with fast, O(N), RLS methods will be reported in Ng and Plemmons [9]. Other methods being considered include an FFT-based frequency-domain version of the quasi-Newton LMS adaptive filtering algorithm by Marshall and Jenkins [7], along with a hybrid LMS-RLS scheme that is also based on FFT computations [9]. Preliminary results indicate that these iterative methods can compete with direct methods in an adaptive signal processing environment.…”
Section: Comments (mentioning)
confidence: 99%