2010 Conference Record of the Forty Fourth Asilomar Conference on Signals, Systems and Computers
DOI: 10.1109/acssc.2010.5757886

A lower bound on the estimator variance for the sparse linear model

Abstract: We study the performance of estimators of a sparse nonrandom vector based on an observation which is linearly transformed and corrupted by additive white Gaussian noise. Using the reproducing kernel Hilbert space framework, we derive a new lower bound on the estimator variance for a given differentiable bias function (including the unbiased case) and an almost arbitrary transformation matrix (including the underdetermined case considered in compressed sensing theory). For the special case of a sparse vector co…
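A sketch of the setup the abstract describes, in our own (assumed) notation: an $S$-sparse vector observed through a linear transformation in additive white Gaussian noise.

$$\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}, \qquad \|\mathbf{x}\|_0 \le S, \qquad \mathbf{n} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I}_M),$$

where $\mathbf{x} \in \mathbb{R}^N$ is nonrandom, $\mathbf{H} \in \mathbb{R}^{M \times N}$ may be underdetermined ($M < N$) as in compressed sensing, and the object of interest is a lower bound on $\operatorname{var}\{\hat{x}_k\}$ for any estimator $\hat{\mathbf{x}}$ whose bias $\mathbf{b}(\mathbf{x}) = \mathbb{E}\{\hat{\mathbf{x}}\} - \mathbf{x}$ is a given differentiable function.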

Cited by 7 publications (16 citation statements). References 11 publications.
“…This is an important difference from the bound given in Theorem VI.1 and, also, from the bound to be given in Theorem VIII.8. Furthermore, still for H = I and c(·) ≡ 0, the bound (34) can be shown [36], [31, p. 106] to be tighter (higher) than the bounds in Theorem VI.1 and Theorem VIII.8.…”
Section: B. A Novel CRB-Type Lower Variance Bound
confidence: 99%
“…These matrices can be characterized by such concepts as the spark, which was formally defined by Donoho and Elad [5], the restricted isometry property (RIP) introduced by Candès and Tao [6], mutual coherence (MC) [7][8][9], and the null space property (NSP) [9,12]. … as a merit function for sparsity can be traced back several decades in a wide range of areas, from seismic traces [13], sparse-signal recovery [14], and sparse-model selection (the LASSO algorithm) in statistics [15] to image processing [16], and continues its growth in other areas [17][18][19].…”
Section: Introduction
confidence: 99%
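Of the matrix conditions listed in this quote, the mutual coherence is the simplest to compute directly: it is the largest absolute inner product between distinct normalized columns of the sensing matrix. A minimal NumPy sketch (our illustration, not code from any cited work; the function name is hypothetical):

```python
import numpy as np

def mutual_coherence(A: np.ndarray) -> float:
    """Largest absolute correlation between distinct columns of A."""
    # Normalize each column to unit l2 norm.
    G = A / np.linalg.norm(A, axis=0, keepdims=True)
    # Absolute Gram matrix of the normalized columns.
    gram = np.abs(G.T @ G)
    # Zero the diagonal so self-correlations are ignored.
    np.fill_diagonal(gram, 0.0)
    return float(gram.max())

# Example: coherence of a random 20x50 Gaussian sensing matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
print(mutual_coherence(A))
```

Low mutual coherence is what makes a matrix favorable for sparse recovery, which is why it appears alongside the spark, RIP, and NSP in the quote above.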
“…However, the calculation of the HCRB for a general sensing matrix A is much more complicated, and therefore attention is focused only on the simplest case of a unit sensing matrix in this section [27,40]. Nevertheless, the HCRB of this simple case is still instructive for a qualitative understanding of the HCRB in general cases.…”
Section: The Hammersley–Chapman–Robbins Bound
confidence: 99%
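For reference, the HCRB in the unit-sensing-matrix case discussed in this quote has a simple closed form. A sketch under our own assumptions ($\mathbf{y} = \mathbf{x} + \mathbf{n}$, $\mathbf{n} \sim \mathcal{N}(\mathbf{0}, \sigma^2\mathbf{I})$, unbiased estimation; the cited works may normalize differently):

$$\operatorname{var}\{\hat{x}_k\} \;\ge\; \sup_{\boldsymbol{\delta} \ne \mathbf{0}} \frac{\delta_k^2}{\exp\!\left(\|\boldsymbol{\delta}\|_2^2/\sigma^2\right) - 1},$$

which instantiates the generic Hammersley–Chapman–Robbins inequality $\operatorname{var}\{\hat{\theta}\} \ge \delta^2/(\mathbb{E}[L^2] - 1)$ with the likelihood ratio $L = p(\mathbf{y}; \mathbf{x} + \boldsymbol{\delta})/p(\mathbf{y}; \mathbf{x})$, using $\mathbb{E}[L^2] = \exp(\|\boldsymbol{\delta}\|_2^2/\sigma^2)$ for white Gaussian noise; sparsity enters by restricting the test displacement $\boldsymbol{\delta}$ so that $\mathbf{x} + \boldsymbol{\delta}$ remains $S$-sparse.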
“…Their closed-form result is tighter than ours when β is sufficiently large, but is not as tight as ours in the low-β range and fails to close the gap between the maximal and nonmaximal cases. In [40], another lower bound is provided that is tighter than ours for all β > 0, but its derivation is based on the RKHS formulation of the BB, which is difficult to generalize when σ_e² ≠ 0. Although our bound is not the tightest, it still provides the correct qualitative trend of the lower bound for sparse estimation, and it is able to deal with matrix perturbation without much effort.…”
Section: Theorem
confidence: 99%
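For context, the BB (Barankin bound) underlying this comparison has the following standard scalar form (our notation, for an unbiased estimator $\hat{\theta}$ of $\theta$; the RKHS formulation used in [40] is a different derivation of the same bound): for any test points $\theta_1, \dots, \theta_N$ and weights $a_1, \dots, a_N$,

$$\operatorname{var}_\theta\{\hat{\theta}\} \;\ge\; \frac{\left(\sum_{i=1}^{N} a_i (\theta_i - \theta)\right)^2}{\mathbb{E}_\theta\!\left[\left(\sum_{i=1}^{N} a_i \left(L(\mathbf{y}; \theta_i) - 1\right)\right)^2\right]}, \qquad L(\mathbf{y}; \theta_i) = \frac{p(\mathbf{y}; \theta_i)}{p(\mathbf{y}; \theta)},$$

and the bound itself is the supremum of this ratio over $N$, the test points, and the weights. Choosing $N = 1$ with a single test point $\theta + \delta$ recovers the HCRB quoted above.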