1999
DOI: 10.1111/1368-423x.00032

Conditions for convergence of Monte Carlo EM sequences with an application to product diffusion modeling

Abstract: Intractable maximum likelihood problems can sometimes be finessed with a Monte Carlo implementation of the EM algorithm. However, there appears to be little theory governing when Monte Carlo EM (MCEM) sequences converge. Consequently, in some applications, convergence is assumed rather than proved. Motivated by this problem in the context of modeling market penetration of new products and services over time, we develop (i) high-level conditions for rates of almost-sure convergence and convergence in distri…

Citations: Cited by 17 publications (18 citation statements)
References: 37 publications
“…Statisticians have explored a great many variants on the EM algorithm, some of which have natural interpretations in the context of iterated learning. In particular, Monte Carlo EM algorithms where m_n > 1 (as opposed to stochastic EM, where m_n = 1) are directly applicable to cases of iterated learning, and have been studied extensively (Sherman et al., 1999; Fort & Moulines, 2003). Statisticians have also investigated a version of the stochastic EM algorithm in which the samples of latent variables from previous iterations are also incorporated, providing a natural way of modeling language evolution when learners are exposed to linguistic data produced by more than one previous generation (Celeux & Diebolt, 1992; Celeux et al., 1995; Delyon, Lavielle, & Moulines, 1999).…”
Section: Language Evolution and Algorithms For Statistical Inference (mentioning)
confidence: 99%
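
The distinction drawn in this excerpt, Monte Carlo EM with m_n > 1 latent draws per E-step versus stochastic EM with a single draw, can be illustrated with a short sketch. The code below is a hypothetical toy example on a two-component Gaussian mixture with known unit variances and equal weights, not the model of the cited papers; the function name mcem_means, the sample sizes, and the iteration count are illustrative assumptions.

```python
# A minimal sketch (assumptions, not the cited papers' models): two-component
# Gaussian mixture with known unit variances and weight 1/2, fitted by EM with
# a simulated E-step. m_n = 1 corresponds to stochastic EM, m_n > 1 to MCEM.
import numpy as np

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(2.0, 1.0, 200)])

def mcem_means(y, m_n, n_iter=50, mu=(-1.0, 1.0)):
    """Estimate the two component means using m_n latent draws per E-step."""
    mu = np.array(mu, dtype=float)
    for _ in range(n_iter):
        # Conditional probability that each observation came from component 1.
        d0 = np.exp(-0.5 * (y - mu[0]) ** 2)
        d1 = np.exp(-0.5 * (y - mu[1]) ** 2)
        p1 = d1 / (d0 + d1)
        # Monte Carlo E-step: draw m_n label vectors instead of using p1 exactly.
        z = rng.binomial(1, p1, size=(m_n, y.size))
        # M-step: complete-data MLE from the pooled simulated labels.
        n1 = z.sum()
        mu[1] = (z * y).sum() / max(n1, 1)
        mu[0] = ((1 - z) * y).sum() / max(z.size - n1, 1)
    return mu

print("stochastic EM (m_n = 1): ", mcem_means(y, m_n=1))
print("Monte Carlo EM (m_n = 20):", mcem_means(y, m_n=20))
```

With m_n = 1 the parameter path keeps fluctuating around the target because each iteration relies on a single simulated completion, while larger m_n averages out the simulation noise, which is the MCEM regime the indexed paper's convergence conditions concern.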
“…However, since we can simulate U, and X_u for any u by using the EA, we adopt an MC implementation of the EM algorithm. The MCEM algorithm was introduced in Wei and Tanner (1990); convergence and implementation issues were tackled in Chan and Ledolter (1995), Sherman et al. (1999) and Fort and Moulines (2003).…”
Section: A Monte Carlo Expectation–Maximization Approach (mentioning)
confidence: 99%
“…However, since we can simulate U, and X_u for any u by using the EA, we adopt an MC implementation of the EM algorithm. The MCEM algorithm was introduced in Wei and Tanner (1990); convergence and implementation issues were tackled in Chan and Ledolter (1995), Sherman et al. (1999) and Fort and Moulines (2003). It is well documented (see for example Fort and Moulines (2003), and references therein) that the number of MC samples that are used to approximate the expectation should increase with the EM iterations.…”
Section: Monte Carlo Expectation-Maximization For Diffusions With Kno… (mentioning)
confidence: 99%
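
The practical point in this excerpt, that the number of Monte Carlo draws should grow across EM iterations, can be sketched as follows. This is a hypothetical toy example (a right-censored normal mean with a geometric schedule m_n = 5 * 1.2^n), not the citing paper's diffusion setting; the model, the schedule, and all numeric choices are assumptions made only for illustration.

```python
# A minimal sketch (toy model, not the citing paper's diffusion setting) of an
# MCEM run whose Monte Carlo sample size m_n increases with the EM iteration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy data: N(1, 1) observations right-censored at c = 1.5; only the count of
# censored values is retained, so the E-step must impute them by simulation.
c = 1.5
full = rng.normal(1.0, 1.0, 500)
observed = full[full <= c]
n_censored = int((full > c).sum())

mu = 0.0
for n in range(1, 31):
    m_n = int(5 * 1.2 ** n)          # geometric schedule: more draws each iteration
    # Monte Carlo E-step: impute each censored value with m_n truncated-normal draws.
    draws = stats.truncnorm.rvs(c - mu, np.inf, loc=mu, scale=1.0,
                                size=(m_n, n_censored))
    # M-step: complete-data MLE of the mean, averaging over the imputations.
    mu = (observed.sum() + draws.mean(axis=0).sum()) / (observed.size + n_censored)

print("MCEM estimate of the censored-normal mean:", mu)
```

Increasing m_n plays the role described in the passage: early iterations are cheap and noisy, while later iterations average over more draws so that the simulation error does not swamp the shrinking EM steps near convergence.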
“…When random intercepts are included in order to take into account unobserved heterogeneity, the EM algorithm shall be modified to a Monte Carlo expectation-maximization algorithm (e.g. [40,42]). These algorithms are described in Appendix 1.…”
Section: Fractional Bfs For Testing Hypothesesmentioning
confidence: 99%