2018
DOI: 10.1109/tit.2017.2783543
Cluster-Seeking James–Stein Estimators

Abstract: This paper considers the problem of estimating a high-dimensional vector of parameters θ ∈ R^n from a noisy observation. The noise vector is i.i.d. Gaussian with known variance. For a squared-error loss function, the James-Stein (JS) estimator is known to dominate the simple maximum-likelihood (ML) estimator when the dimension n exceeds two. The JS-estimator shrinks the observed vector towards the origin, and the risk reduction over the ML-estimator is greatest for θ that lie close to the origin. JS-estimators …
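As a rough illustration of the shrinkage the abstract describes, here is a minimal Python sketch of the classical (positive-part) James-Stein estimate for a single Gaussian observation y of θ with known noise variance; the function name and the small Monte Carlo check are illustrative assumptions, not code from the paper.

```python
import numpy as np

def james_stein(y, sigma2):
    """Positive-part James-Stein estimate of theta from y ~ N(theta, sigma2 * I).

    Shrinks y towards the origin; dominates the ML estimate (y itself)
    in squared-error risk when len(y) > 2.
    """
    n = len(y)
    # Shrinkage factor 1 - (n - 2) * sigma2 / ||y||^2, clipped at zero
    # (the positive-part modification).
    factor = max(0.0, 1.0 - (n - 2) * sigma2 / np.dot(y, y))
    return factor * y

# Quick check: with theta at the origin (the best case for JS, per the
# abstract), the JS risk is far below the ML risk of n * sigma2.
rng = np.random.default_rng(0)
n, sigma2 = 100, 1.0
theta = np.zeros(n)
errs_ml, errs_js = [], []
for _ in range(1000):
    y = theta + rng.normal(scale=np.sqrt(sigma2), size=n)
    errs_ml.append(np.sum((y - theta) ** 2))
    errs_js.append(np.sum((james_stein(y, sigma2) - theta) ** 2))
print(f"ML risk ~ {np.mean(errs_ml):.1f}, JS risk ~ {np.mean(errs_js):.1f}")
```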

Cited by 2 publications (12 citation statements); references 26 publications (55 reference statements).
“…The proof of the theorem is given in [8]. The proof also leads to the following result on the performance of Lindley's positive-part estimator θ̂_L+ given by (7).…”
Section: A Two-Cluster James-Stein Estimator (mentioning)
confidence: 92%
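The snippet above refers to Lindley's positive-part estimator θ̂_L+. The paper's exact equation (7) is not reproduced in this report, but the standard form of Lindley's estimator shrinks the observation towards its grand mean rather than the origin; a minimal sketch under that assumption:

```python
import numpy as np

def lindley_positive_part(y, sigma2):
    """Positive-part Lindley estimate: shrink y towards its grand mean.

    Standard textbook form (assumed here; not necessarily the paper's
    exact eq. (7)), for y ~ N(theta, sigma2 * I) with n > 3:
    y_bar + max(0, 1 - (n - 3) * sigma2 / ||y - y_bar||^2) * (y - y_bar)
    """
    n = len(y)
    y_bar = np.mean(y)
    centered = y - y_bar
    factor = max(0.0, 1.0 - (n - 3) * sigma2 / np.dot(centered, centered))
    return y_bar + factor * centered
```

Shrinking towards the grand mean gives the largest risk reduction when the components of θ cluster around a common value, which is the one-cluster special case of the two-cluster estimator discussed in the cited section.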
“…Recall that the symbol ‘≐’ is shorthand for a concentration inequality of the form (8). The proof of the lemma is given in [8].…”
Section: A Two-Cluster James-Stein Estimator (mentioning)
confidence: 99%