Proceedings of the 26th Annual International Conference on Machine Learning 2009
DOI: 10.1145/1553374.1553389

Structure learning of Bayesian networks using constraints

Abstract: This paper addresses exact learning of Bayesian network structure from data and expert's knowledge based on score functions that are decomposable. First, it describes useful properties that strongly reduce the time and memory costs of many known methods such as hill-climbing, dynamic programming and sampling variable orderings. Secondly, a branch and bound algorithm is presented that integrates parameter and structural constraints with data in a way to guarantee global optimality with respect to the score function…
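
The central assumption in the abstract is that the score decomposes into one local term per variable and its parent set, which is what lets methods such as dynamic programming or branch and bound search exactly. Below is a minimal sketch, not the paper's implementation, of one such decomposable score (the BIC local term for discrete data); the function and column names are our own illustrative choices.

```python
import numpy as np
import pandas as pd

def bic_local_score(data: pd.DataFrame, var: str, parents: list[str]) -> float:
    """One decomposable local term: BIC contribution of `var` given `parents`."""
    n = len(data)
    r = data[var].nunique()                                    # states of the child
    q = int(np.prod([data[p].nunique() for p in parents])) if parents else 1
    loglik = 0.0
    groups = data.groupby(parents) if parents else [((), data)]
    for _, block in groups:
        counts = block[var].value_counts().to_numpy(dtype=float)
        counts = counts[counts > 0]
        loglik += float(np.sum(counts * np.log(counts / counts.sum())))
    penalty = 0.5 * np.log(n) * q * (r - 1)                    # free parameters
    return loglik - penalty
```

Because the total score of a DAG is the sum of these local terms, an exact search only has to reason about candidate parent sets per variable rather than whole networks at once.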

Cited by 142 publications (178 citation statements)
References 10 publications (5 reference statements)
“…Significantly, the use of the Frobenius Norm of the difference of the Bayesian networks' graph Laplacians is very encouraging and suggests further research into distance measures based on graph features such as those derived from Spectral Graph Theory. Experiments with alternate starting states based on conditional information, in a manner similar to the PC Algorithm and CBL, or constraint-based algorithms like Incremental Association or HITON, or even to those claiming to find the exact network structure [10] could also be promising.…”
Section: Discussion
confidence: 99%
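
The distance measure mentioned in this citation statement is straightforward to write down. The sketch below, under our own assumptions about the inputs (symmetric 0/1 adjacency matrices over the same node set), computes the Frobenius norm of the difference of two graph Laplacians.

```python
import numpy as np

def laplacian(adjacency: np.ndarray) -> np.ndarray:
    """Graph Laplacian L = D - A for a symmetric 0/1 adjacency matrix."""
    degree = np.diag(adjacency.sum(axis=1))
    return degree - adjacency

def laplacian_frobenius_distance(a1: np.ndarray, a2: np.ndarray) -> float:
    """||L1 - L2||_F between two graphs on the same node set."""
    return float(np.linalg.norm(laplacian(a1) - laplacian(a2), ord="fro"))

# Example: two 3-node skeletons that differ in a single edge.
g1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
g2 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
print(laplacian_frobenius_distance(g1, g2))
```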
“…The optimality of such an idea can be easily proven by the following lemma, which guarantees that we should use only X_0 as parent of X_i every time such a choice is better than using {X_0, X_j}. It is a straightforward generalization of Lemma 1 in [16]. Lemma 1.…”
Section: Improving Learning of TANs
confidence: 93%
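
To make the pruning idea in this excerpt concrete: for a child X_i in a TAN-style structure, the larger parent set {X_0, X_j} only needs to be kept if it scores strictly better than X_0 alone. The sketch below is our own illustration of that check; `local_score` stands in for any decomposable local score (e.g., the BIC term sketched earlier) and is a hypothetical callable, not the cited paper's code.

```python
from typing import Callable

def best_parent_set(i: int, candidates: list[int],
                    local_score: Callable[[int, list[int]], float]) -> list[int]:
    """Return {X_0} or the best-scoring {X_0, X_j} as the parent set of X_i."""
    best = [0]                                # class variable X_0 alone
    best_score = local_score(i, best)
    for j in candidates:
        if j in (0, i):
            continue
        pair_score = local_score(i, [0, j])
        if pair_score > best_score:           # keep the pair only if it improves
            best, best_score = [0, j], pair_score
    return best
```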
“…Past literature experiments (Cussens, 2011; de Campos and Ji, 2011; Jaakkola et al., 2010) indicate that DP is the fastest method for small values of m (fewer than 15–20 variables), IP is the best method from 15–20 up to about a hundred variables, and IP and BB are anytime algorithms, so they can be run even with large datasets, with accuracy that keeps improving over time. K2 is the only non-exact method (that is, not guaranteed to converge to a global maximum solution), but it is very efficient, so we try to use the K2 search as much as possible; that is, we use K2 as long as it can find an improving solution with respect to the previous run (if it cannot, then a globally optimal method is used).…”
Section: Learning Bayesian Network
confidence: 99%
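
The meta-search described in this excerpt amounts to a simple dispatch rule: keep running the cheap K2 search while it improves the score, and otherwise pick an exact method by problem size. The sketch below is a hedged illustration of that rule only; the routine names passed in as callables are hypothetical stand-ins, not the API of any of the cited implementations.

```python
from typing import Callable

def choose_learner(n_vars: int, k2_still_improving: bool,
                   k2: Callable, dp: Callable, ip: Callable, bb: Callable) -> Callable:
    """Pick the next structure-learning routine to run."""
    if k2_still_improving:
        return k2          # cheap, non-exact local search; keep it while it helps
    if n_vars < 20:
        return dp          # dynamic programming: fastest for small m
    if n_vars <= 100:
        return ip          # integer programming: reported best in the mid range
    return bb              # branch and bound: anytime, usable on large problems
```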
“…These methods are mostly based on turning the incomplete data into a complete dataset (or even directly updating the sufficient statistics), and then resorting to particular methods for complete data. We adopt a meta-search composed of a few distinct methods (Jaakkola et al., 2010; de Campos and Ji, 2011; Cooper and Herskovits, 1992; Silander and Myllymaki, 2006) that selects the best procedure to run depending on the number of covariates and the running time of the methods. The idea is to improve the score in the most efficient way, while still guaranteeing that optimality is achieved.…”
Section: Introduction
confidence: 99%