2018
DOI: 10.1007/s10994-018-5701-9
Approximate structure learning for large Bayesian networks

Abstract: We present approximate structure learning algorithms for Bayesian networks. We discuss the two main phases of the task: the preparation of the cache of scores and structure optimization, both with bounded and unbounded treewidth. We improve on state-of-the-art methods that rely on an ordering-based search by sampling the space of orders more effectively. This allows a remarkable improvement in learning Bayesian networks from thousands of variables. We also present a thorough study of the accuracy a…
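The ordering-based search mentioned in the abstract can be sketched as follows: for a fixed variable order, each variable's best parent set is chosen among subsets of its predecessors (which guarantees acyclicity), and sampling many orders explores the space of structures. This is a minimal illustration under toy assumptions, not the authors' algorithm; the local score table and all function names are hypothetical.

```python
import itertools
import random

def best_structure_for_order(order, score, max_parents=2):
    """For a fixed variable order, choose each variable's best-scoring
    parent set among subsets of its predecessors; the resulting graph
    is acyclic by construction."""
    structure = {}
    for i, v in enumerate(order):
        preds = order[:i]
        candidates = [frozenset()]
        for k in range(1, max_parents + 1):
            candidates += [frozenset(c) for c in itertools.combinations(preds, k)]
        structure[v] = max(candidates, key=lambda ps: score(v, ps))
    return structure

def sample_orders(variables, score, n_samples=200, seed=0):
    """Sample random orders and keep the best-scoring induced structure."""
    rng = random.Random(seed)
    best, best_total = None, float("-inf")
    for _ in range(n_samples):
        order = list(variables)
        rng.shuffle(order)
        structure = best_structure_for_order(order, score)
        total = sum(score(v, ps) for v, ps in structure.items())
        if total > best_total:
            best, best_total = structure, total
    return best, best_total

# Hypothetical local score table in which A -> B -> C is the best structure;
# any (variable, parent set) pair not listed gets a poor default score.
local = {("A", frozenset()): 0.0,
         ("B", frozenset()): 0.0, ("B", frozenset({"A"})): 5.0,
         ("C", frozenset()): 0.0, ("C", frozenset({"B"})): 3.0}
score = lambda v, ps: local.get((v, ps), -10.0)
structure, total = sample_orders(["A", "B", "C"], score)
```

Because parents are always drawn from a variable's predecessors in the order, no cycle check is needed; the quality of the result depends only on how well the sampled orders cover the space.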

Cited by 37 publications (37 citation statements)
References 14 publications
“…Identification of the parent set and an optimized structure allow easy learning in Bayesian networks. A dynamic Bayesian network deals with uncertainties following the addition of time-related information [39]. The probabilistic prediction model requires learning of the structure and parameters, as well as probabilistic reasoning.…”
Section: Related Work
confidence: 99%
“…This fact has been reflected in the scope and generalization of discussion on the results and conclusions. This is mainly due to the characteristics of the Bayesian analysis that, even though the missing variables are suitably supported, are affected by the low number of variables included in the study or, conversely, by their high number ( Scanagatta et al, 2018 ).…”
Section: Limitations of This Study
confidence: 99%
“…However, this method cannot share subexpressions in its decomposable pattern expressions. In addition, Scanagatta et al. [38] presented approximate structure learning algorithms for Bayesian networks, where they improved on state-of-the-art methods that rely on an ordering-based search by sampling more effectively the space of the orders, including parent set identification, structure optimization, and structure optimization under bounded treewidth. Liu and Liu [39] proposed a maximum relevance minimum common redundancy (mRMCR) algorithm based on information theory and feature selection, where they established a mutual information solution formula on the preference database and designed a formula for calculating mRMCR.…”
Section: Related Work
confidence: 99%
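The mutual-information machinery behind criteria like the mRMCR mentioned above can be sketched generically. The following is a minimal illustration of empirical mutual information on discrete data together with a greedy max-relevance min-redundancy (classic mRMR) selection; it is not the mRMCR algorithm of Liu and Liu, whose common-redundancy term is not reproduced here, and all names and data are hypothetical.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        # (c/n) * log( p(x,y) / (p(x) p(y)) ), rearranged to avoid tiny ratios
        mi += (c / n) * math.log(c * n / (px[x] * py[y]))
    return mi

def mrmr_select(features, labels, k):
    """Greedy max-relevance min-redundancy selection over named discrete
    features (generic mRMR criterion, not the mRMCR variant)."""
    relevance = {f: mutual_information(col, labels) for f, col in features.items()}
    selected = []
    while len(selected) < k:
        def criterion(f):
            if not selected:
                return relevance[f]
            red = sum(mutual_information(features[f], features[s]) for s in selected)
            return relevance[f] - red / len(selected)
        best = max((f for f in features if f not in selected), key=criterion)
        selected.append(best)
    return selected

# Toy data: "a" and "b" are identical copies of the labels (fully relevant
# but mutually redundant); "c" is independent of the labels.
feats = {"a": [0, 0, 1, 1], "b": [0, 0, 1, 1], "c": [0, 1, 0, 1]}
chosen = mrmr_select(feats, [0, 0, 1, 1], 2)
```

The redundancy penalty is what distinguishes this family of criteria from plain relevance ranking: a perfect duplicate of an already-selected feature scores no better than an uninformative one on the second pick.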