2009 International Conference on High Performance Computing (HiPC)
DOI: 10.1109/hipc.2009.5433194
A parallel algorithm for exact Bayesian network inference

Cited by 8 publications (15 citation statements) | References 20 publications
“…In 2009, Nikolova et al. [Nikolova et al., 2009] developed the first parallel algorithm for exact BN structure learning that is both work and space optimal. While this work allows pushing the limits on the scale of networks that can be inferred with precision, the NP-hard nature of the problem limits the applicability of this solution to small networks of 30–40 nodes.…”
Section: Max-Min Hill Climbing (MMHC) [Tsamardinos et al., 2006] Algorithm
confidence: 99%
“…Each level in the lattice can be computed concurrently, with data flowing from one level to the next. Nikolova et al. [Nikolova et al., 2009] present an optimal parallel algorithm for the problem of exact induction of optimal BN structures from data. While the memory requirements in the problem of exact BN structure learning would be prohibitive at the scale of interest in this work, the mapping strategy can be adapted for the problem of learning optimal parent sets.…”
Section: Optimal Parents Set for a Single Variable
confidence: 99%
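The level-wise structure described in the excerpt above can be illustrated with a small sketch. The subsets of the variable set form a lattice ordered by size; all subproblems on one level are independent (and thus parallelizable), since each depends only on results from the level below. The scoring function here is a hypothetical toy stand-in, not the actual BN scoring used by the cited algorithms:

```python
from itertools import combinations

def levelwise_subsets(n):
    """Yield the subset lattice of {0..n-1} level by level.
    Within a level all subsets are mutually independent, so they
    could be computed concurrently; data flows only level-to-level."""
    for level in range(n + 1):
        yield [frozenset(c) for c in combinations(range(n), level)]

def dp_over_lattice(n):
    """DP over the lattice with a toy score: each subproblem reads
    only results from one level below (placeholder for the real
    parent-set scoring in exact BN structure learning)."""
    best = {}
    for level in levelwise_subsets(n):
        for S in level:
            if not S:
                best[S] = 0
            else:
                best[S] = min(best[S - {x}] + 1 for x in S)
    return best

scores = dp_over_lattice(3)  # 2**3 = 8 subproblems across 4 levels
```

Because each level only reads the previous level's table, a parallel runtime can evaluate all subsets of a given size simultaneously, which is the concurrency the citation refers to.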
“…As we have discussed in chapter 1, several parallel algorithms have already been developed for solving the structure learning problem, i.e., finding an optimal Bayesian network. In particular, Nikolova et al. (2009, 2013) described a parallel algorithm that realizes direct parallelization of the sequential DP algorithm of Ott et al. (2004) with optimal parallel efficiency. This algorithm is based on the observation that the subproblems constitute a lattice equivalent to an n-dimensional (n-D) hypercube, which has proved to be a very powerful interconnection network topology used by most modern parallel computer systems (Dally and Towles, 2004; Ananth et al., 2003; Loh et al., 2005).…”
Section: Introduction
confidence: 99%
“…Our algorithm realizes direct parallelization of the DP algorithm in (Koivisto, 2006a) with nearly perfect load balancing and optimal parallel time and space efficiency, i.e., the time and space complexity per processor are both O(n·2^(n−k)) for p processors, where k = log p. Our parallel algorithm is an extension of the hypercube algorithm of Nikolova et al. (2009) to the structure discovery problem.…”
Section: Introduction
confidence: 99%
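The per-processor bound quoted above follows from the hypercube mapping: with p = 2^k processors, each processor owns exactly 2^(n−k) of the 2^n subset-indexed subproblems. A minimal sketch of one such assignment (mapping a subset's bitmask to a processor by its top k bits; the exact mapping used in the cited work may differ):

```python
def owner(subset_mask, n, k):
    """Assign a subset (bitmask over n variables) to one of
    p = 2**k processors by its top k bits. Subsets differing in
    one bit land on the same or a neighbouring processor, so
    level-to-level data flow follows hypercube links."""
    return subset_mask >> (n - k)

n, k = 5, 2  # 2**k = 4 processors, 2**n = 32 subproblems
counts = [0] * (1 << k)
for mask in range(1 << n):
    counts[owner(mask, n, k)] += 1
# every processor owns 2**(n-k) = 8 subproblems: perfect balance
```

With each processor holding 2^(n−k) subsets and spending O(n) work per subset, the O(n·2^(n−k)) per-processor time and space bounds follow directly.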