2010
DOI: 10.1109/tcsi.2010.2050222

A Matrix Pseudoinversion Lemma and Its Application to Block-Based Adaptive Blind Deconvolution for MIMO Systems

Abstract: It is well known in the literature that the matrix inversion lemma is very useful for developing a block-based recursive least squares algorithm for the block-based recursive identification of linear systems or the design of adaptive filters. We extend this result to the case where the matrix is singular and present a matrix pseudoinversion lemma along with some illustrative examples. Based on this result, we propose a block-based adaptive multichannel superexponential algorithm. We present simulation results for the performance of…
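For context, the classical matrix inversion lemma (Woodbury identity) referenced in the abstract states that (A + BCD)^{-1} = A^{-1} - A^{-1}B(C^{-1} + DA^{-1}B)^{-1}DA^{-1} whenever the relevant inverses exist; the paper's contribution replaces these inverses with Moore-Penrose pseudoinverses for the singular case. The snippet below is a minimal numerical sketch of the classical identity only, using NumPy; it does not reproduce the paper's pseudoinversion lemma, whose exact statement and conditions are given in the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Classical matrix inversion lemma (Woodbury identity), nonsingular case:
# (A + B C D)^{-1} = A^{-1} - A^{-1} B (C^{-1} + D A^{-1} B)^{-1} D A^{-1}
n, k = 6, 2
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well-conditioned n x n matrix
B = rng.standard_normal((n, k))
C = np.eye(k)
D = rng.standard_normal((k, n))

direct = np.linalg.inv(A + B @ C @ D)
Ai = np.linalg.inv(A)
woodbury = Ai - Ai @ B @ np.linalg.inv(np.linalg.inv(C) + D @ Ai @ B) @ D @ Ai
assert np.allclose(direct, woodbury)

# Singular case: the ordinary inverse no longer exists, and the paper's
# pseudoinversion lemma concerns block updates of the Moore-Penrose
# pseudoinverse instead.  Here we only check that the pseudoinverse is
# well defined for a rank-deficient matrix.
S = B @ B.T                              # rank k < n, hence singular
S_pinv = np.linalg.pinv(S)               # Moore-Penrose pseudoinverse
assert np.allclose(S @ S_pinv @ S, S)    # one of the defining Penrose conditions
```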

Cited by 13 publications (19 citation statements). References 19 publications.
“…Some non-iterative procedures based on the evaluation of generalized pseudoinverse matrices have been recently proposed as novel learning algorithms for SLFNs: among them the method to improve performance of multilayer perceptrons by Halawa [2], the extreme learning machine (ELM) [3], and some more application-oriented studies [4,5,6]. Usually, input weights (linking input and hidden layers) are randomly chosen, and output weights (linking hidden and output layers) are analytically determined by the Moore-Penrose (MP) generalized inverse (or pseudoinverse).…”
Section: Introduction
confidence: 99%
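As a minimal illustration of the pseudoinverse-based training described in this snippet, the sketch below trains a single-hidden-layer feedforward network (SLFN) in the ELM style: the input-to-hidden weights are drawn at random and the output weights are obtained with the Moore-Penrose pseudoinverse. All names, sizes, and the synthetic data are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (illustrative only)
X = rng.uniform(-1.0, 1.0, size=(200, 3))            # 200 samples, 3 inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]         # target to approximate

# ELM-style SLFN: random input weights, pseudoinverse-determined output weights
n_hidden = 50
W_in = rng.standard_normal((X.shape[1], n_hidden))    # randomly chosen, never trained
b = rng.standard_normal(n_hidden)

H = np.tanh(X @ W_in + b)                             # hidden-layer activations
beta = np.linalg.pinv(H) @ y                          # output weights via Moore-Penrose pseudoinverse

y_hat = H @ beta
print("training MSE:", np.mean((y - y_hat) ** 2))
```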
“…Differently from the EVM in [5], (13) means that w̃_i is modified iteratively by the value of the right-hand side of (13)…”
Section: The Proposed Algorithm
confidence: 99%
“…Therefore, our proposed algorithm for achieving the BD is that the vector w̃_i is modified by using the value R†d_i in (13), and then the modified vector, that is, w̃_i in the left-hand side of (13), is normalized by (14).…”
Section: The Proposed Algorithm
confidence: 99%
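The two snippets above describe a pseudoinverse-based update followed by a normalization step. The sketch below illustrates the general shape of such an iteration, assuming R is a (possibly singular) correlation-type matrix and d_i a cross-statistics vector; equations (13) and (14) of the cited paper are not reproduced here, and the Euclidean normalization is an assumption standing in for (14).

```python
import numpy as np

def update_and_normalize(R: np.ndarray, d_i: np.ndarray) -> np.ndarray:
    """One pseudoinverse-based update of the equalizer vector.

    Sketch only: w is replaced by the pseudoinverse of R applied to d_i
    (cf. the R†d_i update quoted above) and then normalized.  The exact
    update (13) and normalization (14) are defined in the cited paper;
    Euclidean normalization is used here as an illustrative assumption.
    """
    w = np.linalg.pinv(R) @ d_i          # well defined even when R is singular
    return w / np.linalg.norm(w)         # placeholder for normalization (14)

# Toy example with a deliberately rank-deficient R
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 3))
R = B @ B.T                              # rank 3 < 5, hence singular
d_i = rng.standard_normal(5)
w_i = update_and_normalize(R, d_i)
print(w_i, np.linalg.norm(w_i))          # unit-norm updated vector
```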