2019
DOI: 10.21629/jsee.2019.03.09

Learning Bayesian networks by constrained Bayesian estimation

Abstract: Bayesian networks (BNs) have become increasingly popular in recent years due to their wide-ranging applications in modeling uncertain knowledge. An essential problem in discrete BNs is learning conditional probability table (CPT) parameters. If training data are sparse, purely data-driven methods often fail to learn accurate parameters. In that case, expert judgments can be introduced to overcome this challenge. Parameter constraints deduced from expert judgments can cause parameter estimates to be consistent with …
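As a rough illustration of the idea described in the abstract (not the paper's exact algorithm), the sketch below estimates one CPT column from sparse counts with a Dirichlet prior and then pushes the estimate into an expert-supplied interval box; the bounds, the hyperparameter, and the clip-and-renormalize projection are all illustrative assumptions.

```python
import numpy as np

def constrained_cpt_estimate(counts, alpha, lower, upper, n_iter=200):
    """Posterior-mean estimate of one CPT column, theta ~ Dirichlet(alpha + counts),
    pushed into the expert-supplied box lower <= theta <= upper.
    The clip-and-renormalize loop is a simple heuristic projection, not the
    estimator derived in the paper."""
    counts = np.asarray(counts, dtype=float)
    theta = (counts + alpha) / (counts + alpha).sum()   # Dirichlet posterior mean
    for _ in range(n_iter):
        theta = np.clip(theta, lower, upper)            # enforce the expert bounds
        theta = theta / theta.sum()                     # restore sum-to-one
    return theta

# Sparse data: six samples for this parent configuration, none in the third state.
counts = [5, 1, 0]
# Assumed expert judgment: the third state is rare but occurs at least 5% of the time.
print(constrained_cpt_estimate(counts, alpha=0.1,
                               lower=[0.0, 0.0, 0.05], upper=[1.0, 1.0, 1.0]))
```

Without the constraint, the zero count drives the third probability toward zero; the expert bound keeps it at a plausible minimum.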

Cited by 12 publications (5 citation statements)
References 21 publications

“…The Bayesian network is an extension of Bayesian theory proposed by Pearl at the University of California in the late 1980s, introduced to address the lack of an effective reasoning algorithm for multivariate joint probability densities in probabilistic reasoning methods [36-39]…”
Section: Methods
confidence: 99%
“…The Bayesian network is an extension of Bayesian theory proposed by Pearl at the University of California in the late 1980s, introduced to address the lack of an effective reasoning algorithm for multivariate joint probability densities in probabilistic reasoning methods [36-39]. A Bayesian network is expressed in the form Q(G, P). G is a directed acyclic graph with m nodes (random variables), and the edges between nodes represent the dependence relationships between the random variables.…”
Section: Bayesian Network Reasoning
confidence: 99%
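To make the quoted description concrete, the fragment below shows how the DAG G and the CPTs P jointly determine the distribution through the standard factorization P(X1, ..., Xm) = ∏ P(Xi | parents(Xi)). The three-node DAG and its CPT values are hand-picked for illustration and do not come from the cited papers.

```python
# Joint probability of one assignment under a tiny BN: Cloudy -> Rain -> WetGrass.
# CPT values are illustrative only.
p_cloudy = {True: 0.5, False: 0.5}
p_rain_given_cloudy = {True: {True: 0.8, False: 0.2}, False: {True: 0.1, False: 0.9}}
p_wet_given_rain = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}

def joint(cloudy, rain, wet):
    # P(C, R, W) = P(C) * P(R | C) * P(W | R): the BN factorization over the DAG.
    return p_cloudy[cloudy] * p_rain_given_cloudy[cloudy][rain] * p_wet_given_rain[rain][wet]

print(joint(True, True, True))  # 0.5 * 0.8 * 0.9 = 0.36
```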
“…In the end, the BN parameters are obtained using the MAP formula. Gao [21] proposes a constrained Bayesian estimation (CBE) algorithm that enhances learning accuracy by introducing expert criteria. Di [22] proposes a constrained adjusted MAP (CaMAP) algorithm by choosing a reasonable equivalent sample size.…”
Section: Introduction
confidence: 99%
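For context on the MAP formula and the equivalent-sample-size choice mentioned in the quote, the snippet below uses the textbook Dirichlet-multinomial estimate with a BDeu-style prior; it is a generic illustration, not necessarily the exact estimator of [21] or [22].

```python
import numpy as np

def map_cpt_column(counts, ess, r, q):
    """MAP-style estimate of one CPT column under a BDeu Dirichlet prior.
    counts: data counts N_ijk for each state k of the child under one parent
            configuration j; r = number of child states, q = number of parent
            configurations; ess = equivalent sample size.
    Uses the common smoothed form (N_ijk + alpha) / (N_ij + r * alpha) with
    alpha = ess / (r * q)."""
    counts = np.asarray(counts, dtype=float)
    alpha = ess / (r * q)
    return (counts + alpha) / (counts.sum() + r * alpha)

# A larger ess pulls the estimate toward uniform; a smaller ess trusts the data more.
print(map_cpt_column([4, 1, 0], ess=1.0, r=3, q=2))
print(map_cpt_column([4, 1, 0], ess=30.0, r=3, q=2))
```

This makes visible why choosing a "reasonable equivalent sample size", as attributed to CaMAP above, matters: it controls how strongly the prior smooths sparse counts.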
“…In addition, study [19] applies only to monotonic constraints, and study [20] applies only to approximate equality constraints. The prior distribution in [21] is set to the BDeu priors rather than the transferred priors, which would be more meaningful. When no parameter constraints are available, CaMAP [22] is inferior to MAP.…”
Section: Introduction
confidence: 99%
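The two constraint families contrasted in the quote can be pictured as follows; the two fix-ups below are simple illustrative projections for a pair of parameters, not the algorithms of [19] or [20].

```python
def enforce_monotonic(theta_a, theta_b):
    # Monotonic constraint (in the spirit of [19]): require theta_a >= theta_b.
    # If violated, replace both by their average (the two-point isotonic fix).
    if theta_a < theta_b:
        m = (theta_a + theta_b) / 2.0
        return m, m
    return theta_a, theta_b

def enforce_approx_equal(theta_a, theta_b, eps):
    # Approximate equality constraint (in the spirit of [20]): |theta_a - theta_b| <= eps.
    # If violated, shrink both toward their midpoint just enough to satisfy the bound.
    d = theta_a - theta_b
    if abs(d) > eps:
        excess = (abs(d) - eps) / 2.0
        if d > 0:
            return theta_a - excess, theta_b + excess
        return theta_a + excess, theta_b - excess
    return theta_a, theta_b

print(enforce_monotonic(0.2, 0.5))          # -> (0.35, 0.35)
print(enforce_approx_equal(0.2, 0.5, 0.1))  # -> (0.3, 0.4)
```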
“…Furthermore, for incomplete data, an improved expectation-maximization algorithm was proposed by combining it with the CMAP method. The paper [35] proposed a constrained Bayesian estimation approach that learns parameters by incorporating constraints deduced from expert judgments with Dirichlet priors.…”
Section: Introduction
confidence: 99%
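For the incomplete-data case mentioned in the last quote, the skeleton below shows where a constraint-aware M-step would slot into EM, reduced to a single binary variable with missing entries; the Beta prior and the assumed expert interval are illustrative, and this is a schematic rather than the cited improved EM/CMAP method.

```python
def constrained_em_binary(data, alpha=1.0, lower=0.05, upper=0.95, n_iter=50):
    """EM for the success probability theta of one binary variable with missing
    entries (None), using a Beta(alpha, alpha) prior and clipping the M-step
    estimate to an assumed expert interval [lower, upper]."""
    theta = 0.5
    observed = [x for x in data if x is not None]
    n_missing = sum(x is None for x in data)
    n = len(data)
    for _ in range(n_iter):
        # E-step: expected count of 1s, filling each missing value with theta.
        expected_ones = sum(observed) + n_missing * theta
        # M-step: smoothed update under the Beta prior.
        theta = (expected_ones + alpha) / (n + 2 * alpha)
        # Constrained step: keep the estimate inside the expert interval.
        theta = min(max(theta, lower), upper)
    return theta

data = [1, 1, None, 0, 1, None, 1]
print(constrained_em_binary(data))  # converges to roughly 5/7
```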