2021
DOI: 10.1214/21-ejs1905

Improved convergence guarantees for learning Gaussian mixture models by EM and gradient EM

Abstract: We consider the problem of estimating the parameters of a Gaussian mixture model with K components of known weights, all with an identity covariance matrix. We make two contributions. First, at the population level, we present a sharper analysis of the local convergence of EM and gradient EM than in previous works. Assuming a separation of Ω(√(log K)), we prove convergence of both methods to the global optima from an initialization region larger than those of previous works. Specifically, the initial guess o…
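In the setting the abstract describes — K spherical Gaussians with identity covariance and known mixing weights — EM only has to update the component means. The sketch below is illustrative, not the paper's implementation; the function name `em_step` and the array shapes are assumptions.

```python
import numpy as np

def em_step(X, mu, weights):
    """One EM iteration for a K-component GMM with identity covariance
    and known mixing weights; only the means are updated.
    X: (n, d) data, mu: (K, d) current means, weights: (K,) mixing weights."""
    # E-step: posterior responsibilities p(k | x_i), computed in the log
    # domain for numerical stability.
    d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)   # (n, K) squared distances
    log_r = np.log(weights)[None, :] - 0.5 * d2
    log_r -= log_r.max(axis=1, keepdims=True)              # stabilize before exp
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: each mean becomes the responsibility-weighted average of the data.
    return (r.T @ X) / r.sum(axis=0)[:, None]
```

With well-separated components (as in the Ω(√(log K)) regime analyzed in the paper), iterating this map from a reasonable initial guess drives the means to the true centers.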

Cited by 4 publications (4 citation statements)
References 17 publications
“…where y_k is given by (30) and A_k is given by (31), and ⊙ denotes the Hadamard product. Notably, we can rewrite (30) as (32), where q(p, , ω; , m, s) is given by (33). Furthermore, we can rewrite A_k from (31) as (34), where g( , ω; , m, s; ˜ , m, s) is given by (35).…”
Section: B. EM Iterations (mentioning; confidence: 99%)
“…and the shifted and padded PSWF eigenfunctions ψ_{N,n}, denoted q(p, , ω; , m, s) in (33), can be computed just once at the beginning of the algorithm with complexity O(N^2 L^2 K_max^5). All in all, the computational complexity of computing…”
Section: E. Complexity Analysis (mentioning; confidence: 99%)
“…Most of the incorrect segmentation is caused by burrs on the yarn surface. The EM algorithm [28][29][30] classifies the pixels according to the grayscale distribution of the image, and then segments the image according to the attributes of each class. This algorithm not only segments the complete yarn, but also preserves the details of the contour.…”
Section: Yarn Evenness Extraction (mentioning; confidence: 99%)
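The grayscale segmentation this citation describes can be sketched as fitting a two-component one-dimensional Gaussian mixture to pixel intensities by EM and assigning each pixel to the component with higher responsibility. This is a generic EM segmentation sketch, not the cited papers' exact method; the name `em_segment` and all parameters are illustrative.

```python
import numpy as np

def em_segment(pixels, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture to pixel intensities by EM,
    then label each pixel by its most responsible component."""
    x = pixels.ravel().astype(float)
    # Initialize the two means from the intensity range; start with equal
    # weights and a common large variance.
    mu = np.array([x.min(), x.max()])
    var = np.full(2, x.var()) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities under each Gaussian.
        dens = pi / np.sqrt(2 * np.pi * var) * np.exp(
            -0.5 * (x[:, None] - mu) ** 2 / var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and variances.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    labels = r.argmax(axis=1).reshape(pixels.shape)
    return labels, mu
```

On a bimodal intensity histogram (dark background vs. bright yarn), this separates the two pixel classes while keeping the boundary soft until the final hard assignment.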