1995
DOI: 10.1016/0167-9473(93)e0056-a
The EM algorithm for graphical association models with missing data

Cited by 526 publications (263 citation statements). References 24 publications.
“…Our analysis revealed that under standard independence assumptions the off-diagonal of the classifiers' covariance matrix corresponds to a rank-one matrix, whose eigenvector entries are proportional to the classifiers' balanced accuracies. Our work gives a computationally efficient and asymptotically consistent solution to the classical problem posed by Dawid and Skene (15) in 1979, for which to the best of our knowledge only nonconvex iterative likelihood maximization solutions have been proposed (18, 26-29). Our work not only provides a principled spectral approach for unsupervised ensemble learning (such as our SML), but also raises several interesting questions for future research.…”
Section: Summary and Discussion (mentioning)
confidence: 99%
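The rank-one structure described in the quoted passage can be checked numerically. The following is a minimal sketch, not the cited authors' code: it simulates conditionally independent binary classifiers with made-up sensitivities and specificities, forms their sample covariance, and compares the leading eigenvector of the off-diagonal part with the shifted balanced accuracies 2*pi_i - 1. Zeroing the diagonal is a crude stand-in for the diagonal-fitting step of the actual spectral method.

```python
# Minimal numeric sketch (illustrative values, not the cited authors' code):
# under Dawid-Skene-style conditional independence the off-diagonal of the
# classifiers' covariance matrix is rank one, with eigenvector entries
# proportional to the shifted balanced accuracies 2*pi_i - 1.
import numpy as np

rng = np.random.default_rng(0)
n, m = 200_000, 5
sens = np.array([0.90, 0.80, 0.70, 0.65, 0.55])   # assumed P(f_i = +1 | y = +1)
spec = np.array([0.85, 0.80, 0.75, 0.60, 0.50])   # assumed P(f_i = -1 | y = -1)
balanced_acc = 0.5 * (sens + spec)

y = rng.choice([-1, 1], size=n, p=[0.4, 0.6])      # hidden true labels
u = rng.random((n, m))
pred = np.where(y[:, None] == 1,
                np.where(u < sens, 1, -1),         # correct w.p. sens on positives
                np.where(u < spec, -1, 1))         # correct w.p. spec on negatives

Q = np.cov(pred, rowvar=False)
np.fill_diagonal(Q, 0.0)        # keep only the (rank-one) off-diagonal part;
                                # the real method fits the diagonal instead of zeroing it
eigvals, eigvecs = np.linalg.eigh(Q)
v = eigvecs[:, -1]
v *= np.sign(v.sum())

print(np.round(v / v[0], 2))                                            # leading eigenvector (scaled)
print(np.round((2 * balanced_acc - 1) / (2 * balanced_acc[0] - 1), 2))  # 2*pi - 1 (scaled)
```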
“…The EM algorithm has been adapted to the particular case of Bayesian networks [16]. In a BN, some nodes X_i may represent variables whose values are missing.…”
Section: EM Algorithm for Bayesian Network (mentioning)
confidence: 99%
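As an illustration of what "EM adapted to Bayesian networks" means in practice, here is a minimal sketch, not Lauritzen's implementation: a tiny discrete network Z -> X1, Z -> X2 in which Z is never observed, with the E-step computing posterior responsibilities for Z and the M-step re-estimating the conditional probability tables from the resulting soft counts. The structure, variable names, and synthetic data are assumptions for the example only.

```python
# Minimal sketch of parameter EM for a tiny discrete Bayesian network
# Z -> X1, Z -> X2, where the node Z is never observed (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

# Generate synthetic complete data, then hide Z to create the missing-data problem.
true_pz = np.array([0.3, 0.7])                    # P(Z)
true_px_z = np.array([[0.9, 0.2],                 # P(X1=1 | Z=0), P(X1=1 | Z=1)
                      [0.1, 0.8]])                # P(X2=1 | Z=0), P(X2=1 | Z=1)
z = rng.choice(2, size=5000, p=true_pz)
x = (rng.random((5000, 2)) < true_px_z[:, z].T).astype(int)   # observed X1, X2

# Random initialization of the CPTs to be estimated.
pz = np.array([0.5, 0.5])
px_z = rng.uniform(0.3, 0.7, size=(2, 2))

for _ in range(200):
    # E-step: posterior responsibility P(Z = k | x) for each record.
    lik = pz * np.prod(np.where(x[:, :, None] == 1, px_z, 1 - px_z), axis=1)
    resp = lik / lik.sum(axis=1, keepdims=True)

    # M-step: re-estimate the CPTs from expected sufficient statistics (soft counts).
    pz = resp.mean(axis=0)
    px_z = (resp[:, None, :] * x[:, :, None]).sum(axis=0) / resp.sum(axis=0)

# Note: with a fully latent Z the recovered states may come out label-swapped.
print(np.round(pz, 2), np.round(px_z, 2))
```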
“…To solve this problem we use the Expectation-Maximization (EM) algorithm [15], and in particular its adaptation to the case of Bayesian networks [16]. Our previously mentioned expert model of the GPON-FTTH network [8] is used as an initialization point for the EM algorithm.…”
Section: Introduction (mentioning)
confidence: 99%
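In the EM sketch above, using an expert model as the starting point, as the quoted passage describes for the GPON-FTTH case, simply means replacing the random initialization with expert-elicited CPTs; the numbers below are made up for illustration and are not taken from the cited model.

```python
# Hypothetical expert-elicited CPTs used to seed EM instead of random values
# (illustrative numbers only, not from the cited GPON-FTTH expert model).
pz = np.array([0.20, 0.80])            # expert belief about P(Z)
px_z = np.array([[0.95, 0.10],         # expert estimate of P(X1=1 | Z=0), P(X1=1 | Z=1)
                 [0.05, 0.85]])        # expert estimate of P(X2=1 | Z=0), P(X2=1 | Z=1)
```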
“…For example, MS-EM (Model Selection - Expectation Maximization) [22], which performs relatively few iterations to find the best network from incomplete data, implements a version of the EM algorithm and uses a scoring metric to choose the best Bayesian model. Other methods applying the EM algorithm can be seen in [15,23].…”
Section: Related Work (mentioning)
confidence: 99%
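A hedged toy version of the "EM plus a scoring metric" idea from the quoted passage: for two binary variables A and B with part of B missing, fit each candidate structure (A and B independent, versus A -> B) and compare BIC scores. MS-EM itself interleaves the structure search with the E-step rather than refitting every candidate from scratch, so this is only a simplified stand-in; all names and data are illustrative.

```python
# Toy model selection with EM and BIC over two candidate structures
# for binary A and partly-missing binary B (simplified stand-in for MS-EM).
import numpy as np

rng = np.random.default_rng(2)
n = 4000
a = rng.choice(2, size=n, p=[0.4, 0.6])
b = (rng.random(n) < np.where(a == 1, 0.8, 0.2)).astype(float)   # true model: A -> B
b[rng.random(n) < 0.3] = np.nan                                  # 30% of B missing at random
obs = ~np.isnan(b)

def em_fit_pb_given_a(n_iter=50):
    """EM for P(B=1 | A) when some B values are missing; A is fully observed."""
    p = np.array([0.5, 0.5])                  # P(B=1 | A=0), P(B=1 | A=1)
    for _ in range(n_iter):
        # E-step: expected value of each missing B is P(B=1 | its A),
        # since B has no observed children in this structure.
        eb = np.where(obs, b, p[a])
        # M-step: weighted frequency estimates.
        for v in (0, 1):
            p[v] = eb[a == v].mean()
    return p

def loglik(pa, pb_given_a):
    """Observed-data log-likelihood, marginalizing over the missing B values."""
    ll = np.log(pa[a]).sum()
    pb = pb_given_a[a[obs]]
    ll += np.log(np.where(b[obs] == 1, pb, 1 - pb)).sum()
    return ll

pa = np.array([(a == 0).mean(), (a == 1).mean()])

# Candidate 1: A and B independent (P(B) has one free parameter).
pb_indep = np.array([b[obs].mean(), b[obs].mean()])
bic_indep = loglik(pa, pb_indep) - 0.5 * (1 + 1) * np.log(n)

# Candidate 2: A -> B (P(B | A) has two free parameters), fitted by EM.
pb_edge = em_fit_pb_given_a()
bic_edge = loglik(pa, pb_edge) - 0.5 * (1 + 2) * np.log(n)

print("BIC without edge:", round(bic_indep, 1), " BIC with edge:", round(bic_edge, 1))
```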