[Proceedings] ICASSP 91: 1991 International Conference on Acoustics, Speech, and Signal Processing 1991
DOI: 10.1109/icassp.1991.150863

Triangular factorization of inverse data covariance matrices

Abstract: A new Cholesky factorization of the inverse covariance matrix can be performed with fully parallel matrix-vector operations, instead of more costly back substitutions. This factorization reformulates the Sherman-Morrison-Woodbury matrix inverse identity as a downdating problem. Givens rotations provide triangularized factors of the inverse data covariance matrix, and the final adaptive solution is obtained after two triangular matrix-vector products. This new factorization algorithm operates in the voltage (or…
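To make the solve step described in the abstract concrete, the sketch below (NumPy, illustrative variable names, not the paper's Givens-rotation downdating algorithm) shows how, once an inverse Cholesky factor U with R^{-1} = U U^T is available, the adaptive solution follows from two triangular matrix-vector products rather than a back substitution.

```python
import numpy as np

# Minimal sketch of the solve step only: it assumes an inverse Cholesky
# factor U with R^{-1} = U @ U.T is already available.  Here U is obtained
# via an ordinary matrix inverse purely for illustration; the paper's point
# is to produce such a factor by Givens-rotation downdating instead.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))     # data snapshots (illustrative)
R = X.T @ X / X.shape[0]              # sample data covariance matrix
p = rng.standard_normal(8)            # cross-correlation / steering vector

L = np.linalg.cholesky(R)             # R = L @ L.T, L lower triangular
U = np.linalg.inv(L).T                # inverse Cholesky factor: R^{-1} = U @ U.T

# Adaptive solution w = R^{-1} p via two triangular matrix-vector products,
# with no back substitution at this stage.
w = U @ (U.T @ p)
assert np.allclose(w, np.linalg.solve(R, p))
```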

Cited by 10 publications (9 citation statements)
References 2 publications
“…Now from (13), (7) and (4), it can be seen that (9) (proposed in [4]) and (11) actually reveal the relation between the m-th and the (m−1)-th order inverse Cholesky factors of the matrix R. This relation is also utilized to implement adaptive filters in [15], [16], where the m-th order inverse Cholesky factor is obtained from the m-th order Cholesky factor [15, equation (12)], [16, equation (16)]. Thus the algorithms in [15], [16] are still similar to the conventional matrix inversion algorithm [17] using Cholesky factorization, where the inverse Cholesky factor is computed from the Cholesky factor by back substitution (for triangular matrix inversion), an inherently serial process unsuitable for parallel implementation [18]. Contrarily, the proposed algorithm computes the inverse Cholesky factor of R_m from R_m directly, as shown in (17) and (11).…”
Section: A Fast Algorithm For Inverse Cholesky Factorization
confidence: 99%
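For readers unfamiliar with the contrast drawn in this statement, the sketch below (illustrative NumPy code, not taken from any of the cited papers; the function name is hypothetical) spells out the conventional route it criticises: inverting an upper-triangular Cholesky factor by back substitution, where each entry depends on previously computed ones — the inherently serial step that a direct inverse-Cholesky recursion avoids.

```python
import numpy as np

def inv_upper_triangular_backsub(C):
    """Invert an upper-triangular Cholesky factor by back substitution.

    Illustrative only: each column of the inverse is filled in from the
    diagonal upwards, and every entry depends on entries computed before
    it, which is the serial dependency noted in the statement above.
    """
    n = C.shape[0]
    V = np.zeros_like(C)
    for j in range(n):
        V[j, j] = 1.0 / C[j, j]
        for i in range(j - 1, -1, -1):          # serial back substitution
            V[i, j] = -np.dot(C[i, i + 1:j + 1], V[i + 1:j + 1, j]) / C[i, i]
    return V

R = np.array([[4.0, 2.0, 0.6],
              [2.0, 5.0, 1.0],
              [0.6, 1.0, 3.0]])                 # small positive-definite example
C = np.linalg.cholesky(R).T                     # upper-triangular factor, R = C.T @ C
V = inv_upper_triangular_backsub(C)             # V = C^{-1}, so R^{-1} = V @ V.T
assert np.allclose(V @ V.T, np.linalg.inv(R))
```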
“…implementation [18], and then requires only half the divisions. Usually the expected, rather than worst-case, complexities of various algorithms are studied [20].…”
Section: Discussion
confidence: 99%
“…while a satisfying (18) can be computed by (17a). We can use (17) and (11) to compute F_m from F_{m−1} iteratively till we get F.…”
Section: A Fast Algorithm For Inverse Cholesky Factorization
confidence: 99%
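The order-recursive idea these statements refer to — building the m-th order inverse Cholesky factor from the (m−1)-th order one without back substitution — can be illustrated with a generic block-matrix recursion. The sketch below is an assumption-laden illustration derived from standard block inversion, not a transcription of equations (9), (11) or (17) of the citing paper, and the function name grow_inverse_cholesky is hypothetical.

```python
import numpy as np

def grow_inverse_cholesky(F_prev, R_new):
    """Extend an (m-1)-th order inverse Cholesky factor to m-th order.

    F_prev : upper-triangular factor with R_prev^{-1} = F_prev @ F_prev.T
    R_new  : m x m covariance matrix whose leading (m-1) x (m-1) block is R_prev

    Uses only triangular matrix-vector products and a scalar square root;
    no back substitution is needed at any step.
    """
    m = R_new.shape[0]
    r = R_new[: m - 1, m - 1]             # new cross-covariance column
    rho = R_new[m - 1, m - 1]             # new diagonal element
    t = F_prev.T @ r                      # triangular matrix-vector product
    phi = 1.0 / np.sqrt(rho - t @ t)      # 1 / sqrt(Schur complement)
    f = -phi * (F_prev @ t)               # new column of the inverse factor
    F = np.zeros((m, m))
    F[: m - 1, : m - 1] = F_prev
    F[: m - 1, m - 1] = f
    F[m - 1, m - 1] = phi
    return F

# Build the inverse Cholesky factor of a 4x4 covariance matrix order by order.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 4))
R = A.T @ A / 50.0
F = np.array([[1.0 / np.sqrt(R[0, 0])]])  # order-1 factor
for m in range(2, 5):
    F = grow_inverse_cholesky(F, R[:m, :m])
assert np.allclose(F @ F.T, np.linalg.inv(R))
```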