2017
DOI: 10.1016/j.artint.2015.03.003

Integer Linear Programming for the Bayesian network structure learning problem

Abstract: Bayesian networks are a commonly used method of representing conditional probability relationships between a set of variables in the form of a directed acyclic graph (DAG). Determination of the DAG which best explains observed data is an NP-hard problem [1]. This problem can be stated as a constrained optimisation problem using Integer Linear Programming (ILP). This paper explores how the performance of ILP-based Bayesian network learning can be improved through ILP techniques and in particular through the add…



Cited by 121 publications (103 citation statements)
References 16 publications (23 reference statements)
“…Bounds greater than 2 can already become prohibitive. For instance, a bound of k = 2 is adopted in [8] when dealing with its largest data set (diabetes), which contains 413 variables. One way of circumventing the problem is to apply pruning rules which allow us to discard/ignore elements of L i in such a way that an optimal parent set is never discarded/ignored.…”
Section: Structure Learning of Bayesian Networks
confidence: 99%
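The pruning rules mentioned in the quote above can be illustrated with a minimal sketch. One standard rule (not necessarily the exact one used in the cited work) discards a candidate parent set J whenever some proper subset of J scores at least as well, since the subset is then always preferable in a score-maximising search. The scores below are hypothetical local scores for a single node:

```python
from itertools import combinations

def prune_parent_sets(scores):
    """Keep only parent sets not dominated by a proper subset.

    scores: dict mapping frozenset (candidate parent set) -> local score,
    where a higher score is better. A set J is discarded if some proper
    subset J' has score(J') >= score(J), because choosing J' is then
    never worse and uses fewer parents.
    """
    kept = {}
    for J, s in scores.items():
        dominated = any(
            scores.get(frozenset(sub), float("-inf")) >= s
            for r in range(len(J))          # proper subsets only
            for sub in combinations(J, r)
        )
        if not dominated:
            kept[J] = s
    return kept

# Hypothetical local scores for one node's candidate parent sets
scores = {
    frozenset(): -10.0,
    frozenset({"A"}): -8.0,
    frozenset({"B"}): -11.0,          # dominated by the empty set
    frozenset({"A", "B"}): -9.0,      # dominated by {A}
}
pruned = prune_parent_sets(scores)
# Only the empty set and {A} survive the pruning.
```

Because pruned sets can never appear in an optimal network, this shrinks each candidate list L_i without affecting the optimum.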
“…Unfortunately, it has been proved that learning an optimal BN is NP-hard [20]. One practical approach to dealing with the intractable complexity is to learn a constrained or totally pre-fixed network structure.…”
Section: Naive Bayes
confidence: 99%
“…Given seven algorithms and 40 datasets, CD can be calculated with Equation (20) and is equal to 1.4245. Following the graphical presentation proposed by Demšar [31], we show the comparison of these algorithms against each other with the Nemenyi test in Figure 5.…”
Section: As Shown In
confidence: 99%
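The critical difference quoted above is consistent with the standard Nemenyi formula CD = q_α · sqrt(k(k+1)/(6N)). A quick check (the critical value q_α ≈ 2.949 for k = 7 classifiers at α = 0.05 is taken from the usual Studentized-range tables, not from the cited paper itself):

```python
import math

def nemenyi_cd(k, n_datasets, q_alpha):
    """Critical difference for the Nemenyi post-hoc test:
    CD = q_alpha * sqrt(k * (k + 1) / (6 * N)),
    where k is the number of algorithms and N the number of datasets."""
    return q_alpha * math.sqrt(k * (k + 1) / (6 * n_datasets))

# k = 7 algorithms, N = 40 datasets, q_0.05 for 7 classifiers ~ 2.949
cd = nemenyi_cd(k=7, n_datasets=40, q_alpha=2.949)
print(round(cd, 4))  # 1.4245, matching the value in the quote
```

Two mean ranks further apart than CD differ significantly at the chosen α.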
“…The search is implemented as a branch-and-cut approach, the essentials of which are outlined next. The IP formulation [Bartlett and Cussens, 2017] with which we work is shown in Figure 1. The binary IP variables used, x i←J , correspond to node i having the set of nodes J as parents in the network.…”
Section: BN Structure Learning
confidence: 99%
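The IP formulation referred to in the quote above can be sketched as follows. This is a paraphrase of the Bartlett and Cussens formulation, with c(i, J) denoting the local score of node i taking parent set J; the exact notation and constraint presentation in the paper may differ:

```latex
\begin{align*}
\max_{x} \quad & \sum_{i \in V} \sum_{J} c(i, J)\, x_{i \leftarrow J} \\
\text{s.t.} \quad & \sum_{J} x_{i \leftarrow J} = 1
  \quad \text{for every node } i \quad \text{(each node selects exactly one parent set)} \\
& \sum_{i \in C} \; \sum_{J :\, J \cap C = \emptyset} x_{i \leftarrow J} \geq 1
  \quad \text{for every nonempty } C \subseteq V \quad \text{(cluster constraints)} \\
& x_{i \leftarrow J} \in \{0, 1\}
\end{align*}
```

The cluster constraints rule out directed cycles: in any DAG, every nonempty set of nodes C must contain at least one node whose parents all lie outside C. Since there are exponentially many clusters, these constraints are added lazily as cutting planes in the branch-and-cut search.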
“…Learning an optimal BN structure is a computationally challenging problem: even the restriction of the BNSL problem where only BDe scores [Heckerman et al, 1995] are allowed is known to be NP-hard [Chickering, 1996]. Due to NP-hardness, much work on BNSL has focused on developing approximate, local search style algorithms [Tsamardinos et al, 2006] that in general cannot guarantee that optimal structures in terms of the objective function are found.…”
Section: Introduction
confidence: 99%