2016
DOI: 10.1007/s41237-016-0006-4
A theoretical analysis of the BDeu scores in Bayesian network structure learning

Abstract: In Bayes score-based Bayesian network structure learning (BNSL), we must specify two prior probabilities: one over the structures and one over the parameters. In this paper, we mainly consider the parameter priors, in particular the BDeu (Bayesian Dirichlet equivalent uniform) and Jeffreys' prior. In model selection, given examples, we typically consider how well a model explains the examples and how simple the model is, and choose the best one according to these criteria. In this sense, if a model A is better than another…
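The abstract refers to the Bayes (marginal likelihood) score under Dirichlet parameter priors such as BDeu and Jeffreys' prior. As a rough illustration only, the sketch below computes the standard local BD log-score for one variable from a table of counts; the function name, the toy counts, and the NumPy/SciPy dependencies are assumptions for illustration, not taken from the paper.

```python
import numpy as np
from scipy.special import gammaln

def bd_local_log_score(counts, alpha_ijk):
    """Local BD log-score: sum over parent configurations j of
    log Gamma(a_ij) - log Gamma(a_ij + n_ij)
    plus sum over states k of log Gamma(a_ijk + n_ijk) - log Gamma(a_ijk)."""
    counts = np.asarray(counts, dtype=float)   # shape (q_i, r_i), entries n_ijk
    alpha = np.full_like(counts, alpha_ijk)    # constant hyperparameter per cell
    return float(
        np.sum(gammaln(alpha.sum(1)) - gammaln(alpha.sum(1) + counts.sum(1)))
        + np.sum(gammaln(alpha + counts) - gammaln(alpha))
    )

# Toy example: a binary variable with two binary parents (q_i = 4, r_i = 2).
counts = [[10, 2], [3, 7], [0, 5], [4, 4]]
q_i, r_i, ess = 4, 2, 1.0
print(bd_local_log_score(counts, ess / (r_i * q_i)))   # BDeu, imaginary sample size 1
print(bd_local_log_score(counts, 0.5))                 # Jeffreys' prior
```

The full network score is the sum (in logs) of such local terms over all variables, combined with the structure prior mentioned in the abstract.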

Cited by 27 publications (36 citation statements)
References 9 publications
“…It is typically used with small imaginary sample sizes such as α_i = 1, as suggested by [2] and [41]. Alternative BD scores have been proposed in [42] and [43,44].…”
Section: Statistical Criteria: Conditional Independence Tests and Net…
confidence: 99%
“…
- for α_ijk = 1 we obtain the K2 score from Cooper and Herskovits (1991);
- for α_ijk = 1/2 we obtain the BD score with Jeffreys' prior (BDJ; Suzuki, 2016);
- for α_ijk = α/(r_i q_i) we obtain the BDeu score from Heckerman et al. (1995), which is the most common choice in the BD family and has α_i = α for all X_i;
- for α_ijk = α/(r_i q̃_i), where q̃_i is the number of parent configurations Π^G_{X_i} such that n_ij > 0, we obtain the BD sparse (BDs) score recently proposed in Scutari (2016);
…”
Section: Bayesian Dirichlet Marginal Likelihoods
confidence: 99%
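For a side-by-side view of the hyperparameter choices quoted above, the sketch below returns the cell-wise α_ijk for each BD variant from a count table. The function name, the toy counts, and the NumPy dependency are illustrative assumptions, and BDs is shown only through its constant α/(r_i q̃_i) rather than as a full implementation.

```python
import numpy as np

def alpha_ijk(score, counts, ess=1.0):
    """Cell-wise Dirichlet hyperparameters for the BD variants listed above."""
    counts = np.asarray(counts, dtype=float)   # shape (q_i, r_i), entries n_ijk
    q_i, r_i = counts.shape
    if score == "K2":                          # alpha_ijk = 1
        return np.ones_like(counts)
    if score == "BDJ":                         # alpha_ijk = 1/2 (Jeffreys' prior)
        return np.full_like(counts, 0.5)
    if score == "BDeu":                        # alpha_ijk = alpha / (r_i * q_i)
        return np.full_like(counts, ess / (r_i * q_i))
    if score == "BDs":                         # alpha_ijk = alpha / (r_i * q~_i)
        # q~_i = number of parent configurations observed in the data; as proposed,
        # BDs assigns this value only to observed configurations (uniform here for brevity).
        q_tilde = max(int((counts.sum(axis=1) > 0).sum()), 1)
        return np.full_like(counts, ess / (r_i * q_tilde))
    raise ValueError(score)

counts = [[10, 2], [3, 7], [0, 0], [4, 4]]     # one parent configuration unobserved
for s in ("K2", "BDJ", "BDeu", "BDs"):
    print(s, alpha_ijk(s, counts)[0, 0])
```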
“…Finally, Suzuki (2016) studied the asymptotic properties of BDeu by contrasting it with BDJ. He found that BDeu is not regular in the sense that it may learn DAGs in a way that is not consistent with either the MDL principle (through BIC) or the ranking of those DAGs given by their entropy.…”
Section: BDeu and Bayesian Model Selection
confidence: 99%
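To make the comparison in this statement concrete, the sketch below computes, for one variable and a candidate parent set, the three quantities being contrasted: the BDeu local log-score, the BIC/MDL local score, and the empirical conditional entropy. It does not reproduce Suzuki's (2016) construction; the function, the toy count tables, and the NumPy/SciPy dependencies are assumptions for illustration.

```python
import numpy as np
from scipy.special import gammaln

def criteria(counts, ess=1.0):
    counts = np.asarray(counts, dtype=float)   # shape (q_i, r_i), entries n_ijk
    q_i, r_i = counts.shape
    n = counts.sum()
    alpha = np.full_like(counts, ess / (r_i * q_i))            # BDeu hyperparameters
    bdeu = (np.sum(gammaln(alpha.sum(1)) - gammaln(alpha.sum(1) + counts.sum(1)))
            + np.sum(gammaln(alpha + counts) - gammaln(alpha)))
    row_totals = counts.sum(1, keepdims=True)
    p_cond = counts / np.where(row_totals > 0, row_totals, 1.0)  # empirical P(X_i = k | parents = j)
    log_terms = np.where(counts > 0, np.log(np.where(counts > 0, p_cond, 1.0)), 0.0)
    loglik = float(np.sum(counts * log_terms))                 # maximized log-likelihood
    cond_entropy = -loglik / float(n)                          # empirical H(X_i | parents), in nats
    bic = loglik - 0.5 * np.log(n) * q_i * (r_i - 1)           # BIC/MDL local score
    return {"BDeu": float(bdeu), "BIC": float(bic), "H(X|parents)": cond_entropy}

# Two hypothetical candidate parent sets for the same variable (toy counts).
print(criteria([[10, 2], [3, 7]]))             # parent set with q_i = 2
print(criteria([[6, 3], [4, 3], [2, 2]]))      # parent set with q_i = 3
```

Ranking candidate parent sets by each of these quantities is what the quoted statement compares; the statement reports that BDeu's ranking need not agree with the other two.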
“…, and A × B, respectively; the computation for {0, 1}^n can be extended to those for A^n, B^n, and (A × B)^n, respectively. For example, for A = {0, 1, · · · , α − 1} with α = 2, the constants a, b and the occurrences c, n − c are replaced by a(x) and c(x), respectively, for x = 0, 1, · · · , α − 1; thus, the extended formula can be expressed by [7], [14] Q…”
Section: Forest Learning From Complete Data
confidence: 99%
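The statement describes extending a coding-probability formula from the binary alphabet {0, 1} to α-ary alphabets by replacing the constants a, b and the occurrences c, n − c with a(x) and c(x). As one standard form such an extension takes, the sketch below computes the Dirichlet marginal (Bayesian coding) probability of a sequence; the exact constants and normalization in the cited formula may differ, and the function name and example sequences are illustrative assumptions.

```python
from math import lgamma, exp
from collections import Counter

def coding_log_prob(sequence, alphabet_size, a=0.5):
    """log Q(x^n) = log Gamma(sum_x a(x)) - log Gamma(n + sum_x a(x))
                    + sum_x [log Gamma(c(x) + a(x)) - log Gamma(a(x))],
    here with a(x) = a for every symbol x."""
    c = Counter(sequence)
    n = len(sequence)
    a_total = a * alphabet_size
    return (lgamma(a_total) - lgamma(n + a_total)
            + sum(lgamma(c.get(x, 0) + a) - lgamma(a) for x in range(alphabet_size)))

# Binary case (alpha = 2): constants a, b = 1/2, 1/2 and occurrences c, n - c.
print(exp(coding_log_prob([0, 1, 1, 0, 1], alphabet_size=2)))
# Ternary extension (alpha = 3): same formula with a(x) and c(x) per symbol.
print(exp(coding_log_prob([0, 1, 2, 2, 1], alphabet_size=3)))
```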