2016
DOI: 10.1007/s00354-016-0002-y
On Model Selection, Bayesian Networks, and the Fisher Information Integral

Abstract: We study BIC-like model selection criteria and, in particular, their refinements that include a constant term involving the Fisher information matrix. We observe that for complex Bayesian network models, the constant term is a negative number with a very large absolute value that dominates the other terms for small and moderate sample sizes. We show that including the constant term degrades model selection accuracy dramatically compared to the standard BIC criterion, where the term is omitted. On the o…
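For context, a sketch of the two criteria the abstract contrasts, written in a standard asymptotic form (the exact constants and sign conventions used in the paper may differ): with \hat{\theta}_M the maximum-likelihood parameters of model M, k_M its number of free parameters, and n the sample size,

\[
\mathrm{BIC}(M; x^n) = \log p(x^n \mid \hat{\theta}_M) - \frac{k_M}{2}\log n ,
\]
\[
\mathrm{FIA}(M; x^n) = \log p(x^n \mid \hat{\theta}_M) - \frac{k_M}{2}\log\frac{n}{2\pi} - \log \int_{\Theta_M} \sqrt{\det I(\theta)}\, \mathrm{d}\theta ,
\]

where I(\theta) is the Fisher information matrix. The last term is the constant (sample-size-independent) term whose very large magnitude for complex Bayesian network models is the subject of the abstract.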

Cited by 2 publications (1 citation statement)
References 18 publications (23 reference statements)
“…A practical workaround is to use a decomposable score, i.e., a score that is additive in terms of the network's nodes and depends only on the parents of the index node. This approach is very close to classical model selection in statistics (Zou and Roos 2017).…”
Section: Learning Algorithm
confidence: 78%
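To make the cited idea concrete, below is a minimal sketch of a decomposable BIC-style score for a discrete Bayesian network: the network score is a sum of per-node terms, each depending only on that node and its parent set, so local changes to the structure only require rescoring the affected nodes. The names (local_bic, network_score, arity, structure) are hypothetical illustrations, not code from either paper.

from math import log, prod
from collections import Counter

def local_bic(data, node, parents, arity):
    """BIC-style local score for one discrete node given its parent set.
    data: list of dicts mapping variable name -> observed value.
    arity: dict mapping variable name -> number of states."""
    n = len(data)
    # Joint counts of (parent configuration, node value) and marginal parent counts.
    counts = Counter((tuple(row[p] for p in parents), row[node]) for row in data)
    parent_counts = Counter(tuple(row[p] for p in parents) for row in data)
    # Maximized log-likelihood of this node's conditional distribution.
    loglik = sum(c * log(c / parent_counts[pa]) for (pa, _), c in counts.items())
    # Free parameters: (node states - 1) per parent configuration.
    q = prod(arity[p] for p in parents) if parents else 1
    k = (arity[node] - 1) * q
    return loglik - 0.5 * k * log(n)

def network_score(data, structure, arity):
    """Decomposable score: a sum over nodes, each term depending only on
    the node and its parents in `structure` (node -> tuple of parents)."""
    return sum(local_bic(data, node, tuple(parents), arity)
               for node, parents in structure.items())

# Example usage with toy data:
# data = [{"A": 0, "B": 1}, {"A": 1, "B": 1}, {"A": 0, "B": 0}]
# arity = {"A": 2, "B": 2}
# structure = {"A": (), "B": ("A",)}
# print(network_score(data, structure, arity))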