2021 · DOI: 10.1109/tsp.2021.3068356

Cramér-Rao Bound for Estimation After Model Selection and Its Application to Sparse Vector Estimation

Cited by 10 publications (11 citation statements) · References 62 publications
“…The Cramér-Rao lower bound (CRLB) on the mean squared error of an unbiased estimator is a frequently used metric for assessing the accuracy of a parameter estimate based on a set of data [50]. The CRLB provides a criterion for the minimum error achievable by the algorithm.…”
Section: B. Cramér-Rao Lower Bound (mentioning)
Confidence: 99%
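For reference, the bound cited in this excerpt has the standard form below; a minimal sketch, assuming a regular model with likelihood f(x; θ) and Fisher information matrix I(θ) (notation ours, not the citing paper's):

```latex
% Cramér-Rao lower bound: for any unbiased estimator \hat{\theta} of \theta,
% the covariance is bounded below by the inverse Fisher information matrix.
\mathrm{Cov}_{\theta}\bigl(\hat{\theta}\bigr) \succeq \mathbf{I}^{-1}(\theta),
\qquad
\mathbf{I}(\theta) \triangleq \mathbb{E}_{\theta}\!\left[
  \frac{\partial \log f(\mathbf{x};\theta)}{\partial \theta}\,
  \frac{\partial \log f(\mathbf{x};\theta)}{\partial \theta^{T}}
\right]
```

For a scalar parameter this reduces to var(θ̂) ≥ 1/I(θ), the minimum-error criterion the excerpt refers to.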
“…In particular, the CCRB [26]-[29], which is associated with the CML estimator, is unsuited as a bound on the performance of Good-Turing estimators outside the asymptotic region, while it provides a lower bound on the MSE of any χ-unbiased estimator [28]-[30]. Our recent works on non-Bayesian estimation after selection [31]-[33] suggest that conditional schemes, in which the performance criterion depends on the observed data, require different CRB-type bounds.…”
Section: B. Related Work (mentioning)
Confidence: 99%
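The CCRB mentioned here is the constrained Cramér-Rao bound; a common statement is sketched below under the usual regularity assumptions (this is the generic constrained bound, not the specific bound derived in the cited works):

```latex
% Constrained CRB: for parametric constraints f(\theta) = 0 with Jacobian
% F(\theta) = \partial f(\theta) / \partial \theta^{T}, let U(\theta) be an
% orthonormal basis for the null space of F(\theta), F(\theta) U(\theta) = 0.
% Then, for any suitably unbiased estimator \hat{\theta},
\mathrm{Cov}_{\theta}\bigl(\hat{\theta}\bigr) \succeq
\mathbf{U}\bigl(\mathbf{U}^{T}\mathbf{I}(\theta)\,\mathbf{U}\bigr)^{-1}\mathbf{U}^{T}
```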
“…The measure of closeness is determined by the considered cost function, C(θ̂, θ). Examples of Lehmann unbiasedness with different cost functions and under parametric constraints can be found in [28], [29], [31]-[34], [43].…”
Section: A. Lehmann Unbiasedness (mentioning)
Confidence: 99%
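The Lehmann unbiasedness invoked in this excerpt has a standard definition; a sketch, with C(θ̂, θ) the chosen cost and Θ the parameter space:

```latex
% Lehmann unbiasedness w.r.t. a cost C: an estimator \hat{\theta} is
% Lehmann-unbiased if its expected cost at any \eta is no smaller than
% its expected cost at the true parameter \theta:
\mathbb{E}_{\theta}\bigl[C(\hat{\theta},\eta)\bigr] \;\geq\;
\mathbb{E}_{\theta}\bigl[C(\hat{\theta},\theta)\bigr],
\qquad \forall\, \eta, \theta \in \Theta
```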
“…Lehmann [31] proposed a generalization of the unbiasedness concept based on the cost function chosen for each scenario, which can be used with various cost functions (see, e.g., [14], [34], [36], [37]). The following proposition defines the graph unbiasedness property of estimators w.r.t.…”
Section: A. Graph Unbiasedness (mentioning)
Confidence: 99%
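As a worked special case (a standard reduction, not specific to the graph setting quoted above), choosing the squared-error cost recovers classical mean-unbiasedness:

```latex
% With C(\hat{\theta},\theta) = \|\hat{\theta} - \theta\|^{2}, the map
% \eta \mapsto \mathbb{E}_{\theta}[\|\hat{\theta} - \eta\|^{2}] is minimized
% at \eta = \mathbb{E}_{\theta}[\hat{\theta}]; requiring the minimum to occur
% at \eta = \theta yields the classical unbiasedness condition
\mathbb{E}_{\theta}\bigl[\hat{\theta}\bigr] = \theta,
\qquad \forall\, \theta \in \Theta
```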