2018
DOI: 10.1214/18-ejs1430

A deconvolution path for mixtures

Abstract: We propose a class of estimators for deconvolution in mixture models based on a simple two-step "bin-and-smooth" procedure applied to histogram counts. The method is both statistically and computationally efficient: by exploiting recent advances in convex optimization, we are able to provide a full deconvolution path that shows the estimate for the mixing distribution across a range of plausible degrees of smoothness, at far less cost than a full Bayesian analysis. This enables practitioners to conduct a sensi…
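The two-step idea from the abstract can be illustrated with a minimal sketch. This is not the paper's estimator (the paper uses dedicated convex-optimization machinery to trace a full path); it is a toy version under assumed data and an assumed ridge-style roughness penalty, showing the "bin" step (histogram counts) and one point on a "smooth" path (penalizing second differences of the binned counts):

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 ("bin"): histogram counts of noisy observations.
# Toy data (an assumption for illustration): a two-component Gaussian
# mixture observed with additive Gaussian measurement noise.
signal = np.where(rng.random(5000) < 0.5,
                  rng.normal(-2.0, 0.5, 5000),
                  rng.normal(2.0, 0.5, 5000))
obs = signal + rng.normal(0.0, 0.3, 5000)
counts, edges = np.histogram(obs, bins=60)
centers = 0.5 * (edges[:-1] + edges[1:])

# Step 2 ("smooth"): penalized least squares on the bin counts,
# shrinking second differences toward zero. The penalty weight lam
# corresponds to one degree of smoothness; varying it over a grid
# gives a crude analogue of a "path" of estimates.
n = len(counts)
D = np.diff(np.eye(n), n=2, axis=0)   # second-difference operator, shape (n-2, n)
lam = 10.0                            # smoothness level (hypothetical choice)
smooth = np.linalg.solve(np.eye(n) + lam * D.T @ D, counts.astype(float))
```

Re-solving for several values of `lam` and plotting `smooth` against `centers` gives the kind of smoothness sweep the abstract describes, without any of the paper's efficiency guarantees.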

Cited by 6 publications (4 citation statements) | References 52 publications
“…In many statistical problems, we observe two unknown distributions indirectly and aim to investigate the difference between them. 35 The unknown distribution can be estimated through deconvolution, in accordance with existing methods. 11 However, these methods are only designed for one‐sample estimation.…”
Section: Discussion
confidence: 99%
“…Here, we argue that this phenomenon is another example of Stein's paradox [Efron and Morris, 1977, James and Stein, 1961, Stigler, 1990]. We then discuss how this connects to ideas in Monte Carlo sampling, in particular importance sampling, and various improvements such as Riemann sums [Philippe, 1997, Philippe and Robert, 2001] or vertical likelihood integration, which applies 'binning and smoothing' using a score-function heuristic to choose the weight function [Madrid-Padilla et al., 2018, Polson and Scott, 2014]. We then show how the idea of binning and smoothing also improves the HT estimator [Ghosh, 2015] in the apparent-weakness example due to Wasserman [2004].…”
Section: Adaptive Normalization
confidence: 97%
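The importance-sampling idea mentioned in the statement above can be sketched in a few lines. This is a generic self-normalized importance-sampling example, not the cited paper's adaptive-normalization or vertical-likelihood scheme; the target, proposal, and integrand below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Self-normalized importance sampling: estimate E_p[X^2] for a standard
# normal target p using draws from a wider normal proposal q = N(0, 2^2).
x = rng.normal(0.0, 2.0, 100_000)

# log p(x) - log q(x), up to an additive constant (constants cancel
# when the weights are normalized below).
log_w = -0.5 * x**2 - (-0.5 * (x / 2.0)**2)
w = np.exp(log_w - log_w.max())        # subtract max for numerical stability

# Weighted estimate of E_p[X^2]; for p = N(0, 1) the true value is 1.
estimate = np.sum(w * x**2) / np.sum(w)
```

Schemes like Riemann sums or binned/smoothed weight functions refine exactly this construction: they replace the raw weights `w` with a less variable surrogate while keeping the estimator (approximately) consistent.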
“…There is an extensive statistics literature addressing the additive deconvolution problem (we do not discuss in what follows other deconvolution problems that are not relevant to our situation, such as those related to the composition of distributions, which usually require complete knowledge of the noise distribution15,16). A common set of deconvolution methods are kernel-based approaches, such as those relying on Fourier transforms,17,18,19,20,21,22,23 which use the fact that in Fourier space, a deconvolution is simply the product of two functions.…”
Section: Introduction
confidence: 99%
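The Fourier-space factorization referred to above can be shown concretely. For additive noise, Y = X + ε, the characteristic functions satisfy φ_Y = φ_X · φ_ε, so an estimate of φ_X is obtained by dividing the empirical characteristic function of the observations by the (here assumed known) noise characteristic function. The distributions and grid below are illustrative assumptions, and no regularization is applied, which real kernel deconvolution estimators require:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy additive model: X ~ N(0, 1), noise ~ N(0, 0.5^2), Y = X + noise.
x = rng.normal(0.0, 1.0, 40_000)
y = x + rng.normal(0.0, 0.5, 40_000)

t = np.linspace(-3.0, 3.0, 61)                         # frequency grid
phi_y = np.mean(np.exp(1j * np.outer(t, y)), axis=1)   # empirical char. fn of Y
phi_noise = np.exp(-0.5 * (0.5 * t) ** 2)              # known noise char. fn
phi_x_hat = phi_y / phi_noise                          # deconvolved char. fn
```

Inverting `phi_x_hat` (with damping at high frequencies, where `phi_noise` is small and the division blows up noise) yields the classical Fourier-based density deconvolution estimators the statement cites.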
“…As in the case of kernel-based methods, these approaches provide us with point estimates, and usually assume exact knowledge of the noise distribution.16 Additionally, these methods commonly use kernel density estimation to represent the data,10 which is rather limited when dealing with multidimensional datasets. Finally, a third class of methods involve Bayesian inference,26,27,28 which does not require complete knowledge of the noise distribution and naturally provides confidence intervals of the estimates obtained.…”
Section: Introduction
confidence: 99%