2018
DOI: 10.1016/j.artint.2018.04.002

Entropy-based pruning for learning Bayesian networks using BIC

Abstract: For decomposable score-based structure learning of Bayesian networks, existing approaches first compute a collection of candidate parent sets for each variable and then optimize over this collection by choosing one parent set for each variable without creating directed cycles while maximizing the total score. We target the task of constructing the collection of candidate parent sets when the score of choice is the Bayesian Information Criterion (BIC). We provide new non-trivial results that can be used to prun…
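As context for the pruning task the abstract describes, the sketch below computes the local BIC score of one candidate parent set from discrete data; structure learning then selects one parent set per variable, without creating directed cycles, so as to maximize the sum of these local scores. This is a minimal illustration assuming a pandas DataFrame of discrete variables; the function name, its arguments, and the choice to count only parent configurations observed in the data are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of a local BIC score for one candidate parent set.
# Assumes discrete data in a pandas DataFrame; names are illustrative.
import numpy as np
import pandas as pd

def bic_local_score(data: pd.DataFrame, child: str, parents: list[str]) -> float:
    """Local BIC: maximized log-likelihood minus (log N / 2) * free parameters."""
    n = len(data)
    r = data[child].nunique()                      # number of states of the child
    if parents:
        groups = data.groupby(parents, observed=True)[child]
    else:
        groups = data.groupby(lambda _: 0)[child]  # single, empty parent configuration
    log_lik = 0.0
    q = 0                                          # parent configurations seen in the data
    for _, values in groups:
        q += 1
        counts = values.value_counts().to_numpy(dtype=float)
        log_lik += float(np.sum(counts * np.log(counts / counts.sum())))
    penalty = 0.5 * np.log(n) * q * (r - 1)        # (r - 1) free parameters per configuration
    return log_lik - penalty
```

For example, `bic_local_score(df, "X3", ["X1", "X2"])` would return the local score contributed by the candidate parent set {X1, X2} for variable X3; pruning rules such as those in the paper aim to discard candidate parent sets without evaluating every such score.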

Cited by 32 publications (23 citation statements)
References 6 publications
“…To infer or evaluate borrowers' credit based on a BN, we first need to learn the BN model from the data set, including its structure and parameters. Generally, the BIC score [8] can be used to measure the model; the higher the BIC score, the better the obtained BN model.…”
Section: BN Model (mentioning)
confidence: 99%
“…The Bayesian network (BN) [6] can be used to represent and handle uncertain relationships between variables and has good interpretability, so many efficient BN model learning algorithms [7][8][9][10] have been proposed. The BN model has been applied in many fields.…”
Section: Introduction (mentioning)
confidence: 99%
“…(1) The criterion for model selection is also called the scoring function; it evaluates how well the learned model fits the observed data. Here we introduce only the BIC (Bayesian information criterion) scoring function [12][13], whose form is…”
Section: Bayesian Network Model (mentioning)
confidence: 99%
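The quoted statement is cut off before the formula itself. For reference, a standard form of the decomposable BIC score for a discrete Bayesian network structure G over variables X_1, …, X_n learned from N samples is given below; this is the textbook formulation, not necessarily the exact notation used by the citing paper:

```latex
\mathrm{BIC}(G) \;=\; \sum_{i=1}^{n} \left[ \sum_{j=1}^{q_i} \sum_{k=1}^{r_i} N_{ijk} \log \frac{N_{ijk}}{N_{ij}} \;-\; \frac{\log N}{2}\, q_i\,(r_i - 1) \right]
```

where $r_i$ is the number of states of $X_i$, $q_i$ is the number of configurations of its parent set, $N_{ijk}$ counts the samples with $X_i$ in state $k$ under parent configuration $j$, and $N_{ij} = \sum_k N_{ijk}$.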
“…Wang et al. [23] considered incorporating ancestral constraints into exact learning algorithms based on the order graph. Tan et al. [24] proposed a bidirectional heuristic algorithm to search the order graph. The above algorithms rely mainly on search strategies to improve the efficiency of exact BN structure learning.…”
Section: Introduction (mentioning)
confidence: 99%