2021
DOI: 10.1214/20-aos2022
SuperMix: Sparse regularization for mixtures

Abstract: This paper investigates the statistical estimation of a discrete mixing measure µ0 involved in a kernel mixture model. Using some recent advances in ℓ1-regularization over the space of measures, we introduce a "data fitting and regularization" convex program for estimating µ0 in a grid-less manner from a sample of the mixture law; this method is referred to as the Beurling-LASSO. Our contribution is two-fold: we derive a lower bound on the bandwidth of our data fitting term depending only on the support of µ0 and its …
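For readers skimming the abstract, here is a minimal sketch of what such a "data fitting and regularization" program looks like, assuming the standard Beurling-LASSO form over the space M(Θ) of signed measures (the paper's exact data-fitting term and its kernel bandwidth are not reproduced here):

\[
\hat{\mu} \in \operatorname*{arg\,min}_{\mu \in \mathcal{M}(\Theta)} \; \frac{1}{2}\, \big\| \Phi\mu - \hat{y} \big\|^2 + \kappa\, |\mu|(\Theta),
\qquad
\Phi\mu = \int_{\Theta} \varphi(\theta)\, \mathrm{d}\mu(\theta),
\]

where |µ|(Θ) is the total-variation norm (the measure-space analogue of the ℓ1 norm), κ > 0 is a tuning parameter, and ŷ is an empirical quantity built from the sample. The "grid-less" aspect is that µ ranges over all discrete measures on Θ rather than over a fixed discretisation of Θ.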

Cited by 4 publications (4 citation statements). References 33 publications (98 reference statements).
“…This choice of gradient with the cone metric enables multiplicative updates in r and additive updates in x, the two updates being independent of each other. The algorithm then consists of a gradient descent with r_i(t) and x_i(t) defined according to [2, 23]:…”
Section: Numerical Results
confidence: 99%
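To make the multiplicative/additive update pattern concrete, here is a minimal sketch of such a conic particle gradient descent on a BLasso-type objective. The Gaussian dictionary, grid, step sizes, and initialisation below are illustrative assumptions, not the exact setting of [2, 23]:

import numpy as np

# Sketch of a particle gradient descent for the BLasso-type objective
#   F(mu) = 0.5 * ||y - sum_i r_i * phi(x_i)||^2 + lam * sum_i r_i,
# with mu = sum_i r_i * delta_{x_i} and r_i >= 0: the weights r_i move
# multiplicatively (cone metric) and the positions x_i move additively.

t = np.linspace(-1.0, 1.0, 200)        # observation grid (assumed)
sigma = 0.1                            # kernel bandwidth (assumed)

def phi(x):
    """Dictionary element: a Gaussian bump centred at x, sampled on t."""
    return np.exp(-0.5 * ((t - x) / sigma) ** 2)

def dphi(x):
    """Derivative of phi with respect to its position x."""
    return phi(x) * (t - x) / sigma ** 2

y = 1.0 * phi(-0.3) + 0.5 * phi(0.4)   # noiseless two-spike observation
lam = 0.1                              # regularization strength (assumed)
r = np.full(20, 0.05)                  # particle weights r_i(0)
x = np.linspace(-0.9, 0.9, 20)         # particle positions x_i(0)

for _ in range(2000):
    residual = (r[:, None] * np.array([phi(xi) for xi in x])).sum(0) - y
    g = np.array([phi(xi) @ residual for xi in x]) + lam   # dF/dr_i
    dx = np.array([dphi(xi) @ residual for xi in x])       # dF/dx_i per unit mass
    r *= np.exp(-0.01 * g)   # multiplicative update in r
    x -= 0.01 * dx           # additive update in x, independent of the r-update

# Particles carrying non-negligible weight should cluster near the true spikes.
print(np.round(x[r > 0.1], 2), np.round(r[r > 0.1], 2))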
“…The point is to estimate the parameters (a_i) ∈ R^N and (x_i) ∈ X^N of a mixture ∑_{i=1}^N a_i ϕ(x_i) of N elementary distributions described by ϕ. For instance, one wants to retrieve the means µ_i ∈ R and standard deviations σ_i ∈ R_+ of a Gaussian mixture; see [2] for more insights on this question:
• deep learning, such as training neural networks with a single hidden layer [3];
• signal processing, for instance low-rank tensor decomposition for Direction of Arrival estimation through a sensor array (multiple sampling points);
• super-resolution, a rather central problem in image processing. Roughly speaking, it consists of the reconstruction of details from an altered input signal/image.…”
Section: Introduction
confidence: 99%
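As a concrete instance of the parametrisation ∑_{i=1}^N a_i ϕ(x_i), here is a small sketch of a two-component Gaussian mixture with x_i = (µ_i, σ_i); all numerical values are assumptions chosen for illustration:

import numpy as np

# Mixture parametrisation: density(t) = sum_i a_i * phi(x_i)(t), where each
# dictionary parameter x_i = (mu_i, sigma_i) indexes a Gaussian density.

def phi(x, t):
    """Gaussian density with parameter x = (mu, sigma), evaluated at t."""
    mu, sigma = x
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

a = np.array([0.7, 0.3])          # mixture weights (a_i), summing to 1
x = [(-1.0, 0.5), (2.0, 1.0)]     # parameters x_i = (mu_i, sigma_i)

t = np.linspace(-4.0, 6.0, 500)
density = sum(ai * phi(xi, t) for ai, xi in zip(a, x))   # the mixture law

# A sample from the mixture: draw a component index, then draw from it.
rng = np.random.default_rng(0)
idx = rng.choice(len(a), size=1000, p=a)
sample = np.array([rng.normal(*x[i]) for i in idx])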
“…The insightful paper [26] builds certificates in a quite general setting for a one-dimensional parameter set Θ. In [22], the authors exhibit certificate functions to deal with more general probability density models where Θ is multidimensional. However, they are restricted to translation-invariant dictionaries (16).…”
Section: Certificates
confidence: 99%
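For context, a common form of certificate in the sparse-spikes literature (a sketch of the standard notion; the exact constructions in [26] and [22] may differ) is a dual function η built from the dictionary that interpolates the sign pattern of the true measure and stays strictly below one elsewhere:

\[
\eta = \Phi^* p \ \ \text{for some } p,
\qquad
\eta(\theta_i) = \operatorname{sign}(a_i) \ \ \text{for all } i,
\qquad
|\eta(\theta)| < 1 \ \ \text{for } \theta \notin \{\theta_1, \dots, \theta_N\},
\]

where µ0 = ∑_i a_i δ_{θ_i} is the true mixing measure. Roughly, the existence of such a nondegenerate η certifies that µ0 is recovered exactly by the noiseless regularized program.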
“…For results on a wider range of dictionaries, let us highlight the work of [26], which gives recovery and robustness-to-noise results for spike deconvolution. Let us also mention the recent work of [8], which generalizes some exact recovery results to a broader family of dictionaries, as well as the paper [7], which gives robustness-to-noise guarantees for a family of shifted functions (ϕ(θ) = k(· − θ), θ ∈ Θ) of a given function k. In a density model that is a mixture of shifted functions, [22] studies a modification of the BLasso by considering a weighted L² prediction error.…”
confidence: 99%
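Schematically, and assuming the same BLasso template as above (the precise weight function and empirical data-fitting term in [22] are not reproduced here), such a weighted-L² modification reads:

\[
\hat{\mu} \in \operatorname*{arg\,min}_{\mu \in \mathcal{M}(\Theta)} \; \frac{1}{2} \int \big( f_{\mu}(t) - \hat{f}(t) \big)^2\, w(t)\, \mathrm{d}t \;+\; \kappa\, |\mu|(\Theta),
\qquad
f_{\mu} = \int_{\Theta} k(\cdot - \theta)\, \mathrm{d}\mu(\theta),
\]

where f̂ is an empirical estimate of the observed mixture density and w ≥ 0 is the weight function defining the prediction error.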