1994
DOI: 10.1016/0888-613x(94)90019-1
Efficient inference in Bayes networks as a combinatorial optimization problem

Cited by 88 publications (66 citation statements)
References 5 publications
“…Our second practical application of local BNs concerns DC (Dechter 1996; Li and D'Ambrosio 1994; Zhang 1998). MSBNs are well established in the probabilistic Table 5 The computation needed in DC to process five localized queries in the original CHD BN in Fig.…”
Section: Lemma 6 Given As Input a BN D The Output {B
confidence: 99%
“…One approach, called multiply sectioned Bayesian networks (MSBNs) (Xiang 1996, 2002; Xiang and Jensen 1999; Xiang et al. 1993, 2000, 2006), performs inference in sections of a BN. A second approach, called direct computation (DC) (Dechter 1996; Li and D'Ambrosio 1994; Zhang 1998), answers queries directly in the original BN. Although we focus on a third approach called join tree propagation, in which inference is conducted in a join tree (JT) (Pearl 1988; Shafer 1996) constructed from the DAG of a BN, our work has practical applications in all three approaches.…”
confidence: 99%
“…Repeating the same procedure for all variables except X 5 would lead us to the desired result. This procedure is known as the variable elimination algorithm [16][17][18]. Thus, the idea that distinguishes this approach from the naïve approach outlined above is to organize the operations among the conditional distributions in the network, so that we do not manipulate distributions that are unnecessarily large.…”
Section: Inference
confidence: 99%
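The excerpt above describes the core idea of variable elimination: interleave factor multiplication and summation so that no unnecessarily large intermediate distribution is ever built. The following is a minimal illustrative sketch over binary variables (my own toy encoding, not the cited implementations): a factor is a pair of a variable tuple and a table mapping 0/1 assignments to probabilities, and each variable is removed by multiplying only the factors that mention it before summing it out.

```python
from itertools import product

def multiply(f, g):
    """Pointwise product of two factors over the union of their variables."""
    fv, ft = f
    gv, gt = g
    vs = tuple(dict.fromkeys(fv + gv))  # union of variables, order preserved
    table = {}
    for assign in product([0, 1], repeat=len(vs)):
        env = dict(zip(vs, assign))
        table[assign] = (ft[tuple(env[v] for v in fv)] *
                         gt[tuple(env[v] for v in gv)])
    return vs, table

def sum_out(f, var):
    """Marginalize `var` out of factor f."""
    fv, ft = f
    i = fv.index(var)
    vs = fv[:i] + fv[i + 1:]
    table = {}
    for assign, p in ft.items():
        key = assign[:i] + assign[i + 1:]
        table[key] = table.get(key, 0.0) + p
    return vs, table

def eliminate(factors, order):
    """Eliminate variables in `order`. Only the factors mentioning the
    current variable are multiplied before it is summed out, which is
    exactly how VE keeps intermediate distributions small."""
    factors = list(factors)
    for var in order:
        touching = [f for f in factors if var in f[0]]
        rest = [f for f in factors if var not in f[0]]
        prod = touching[0]
        for f in touching[1:]:
            prod = multiply(prod, f)
        factors = rest + [sum_out(prod, var)]
    result = factors[0]
    for f in factors[1:]:
        result = multiply(result, f)
    return result

# Toy two-node chain A -> B: P(A) and P(B | A); eliminating A yields P(B).
p_a = (('A',), {(0,): 0.6, (1,): 0.4})
p_b_given_a = (('A', 'B'), {(0, 0): 0.9, (0, 1): 0.1,
                            (1, 0): 0.2, (1, 1): 0.8})
p_b = eliminate([p_a, p_b_given_a], ['A'])
# P(B=0) = 0.6*0.9 + 0.4*0.2 = 0.62
```

The elimination order is left to the caller; as the surrounding citations note, choosing a good order is itself the combinatorial optimization problem the 1994 paper addresses.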
“…A variable is barren [13], if it is neither an evidence nor a target variable and it only has barren descendants. Probabilistic inference can be conducted directly in the original BN [6,9,12,13,17,18,19]. It can also be performed in a join tree [3,5,7,8,14].…”
Section: Probabilistic Inference
confidence: 99%
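The barren-variable definition quoted above (neither evidence nor target, with only barren descendants) admits a simple iterative characterization: repeatedly prune leaves that are neither evidence nor target, since a node whose children are all barren becomes a leaf once they are removed. A small sketch under that reading, with the DAG given as a hypothetical parents-map:

```python
def barren_variables(parents, evidence, targets):
    """Return the barren variables of a DAG given as {node: set_of_parents}.

    Iteratively removes leaf nodes that are neither evidence nor target;
    the set of removed nodes is exactly the barren set.
    """
    nodes = set(parents)
    relevant = evidence | targets
    barren = set()
    changed = True
    while changed:
        changed = False
        remaining = nodes - barren
        for v in list(remaining):
            if v in relevant:
                continue
            # v is a leaf among the remaining nodes if no remaining node
            # lists it as a parent.
            is_leaf = not any(v in parents[u] for u in remaining if u != v)
            if is_leaf:
                barren.add(v)
                changed = True
    return barren

# Chain with a fork: A -> B, B -> C, B -> D.
# With evidence {A} and target {B}, both C and D are barren and
# can be pruned before inference.
dag = {'A': set(), 'B': {'A'}, 'C': {'B'}, 'D': {'B'}}
pruned = barren_variables(dag, evidence={'A'}, targets={'B'})
# pruned == {'C', 'D'}
```

Pruning barren variables before running VE or DC shrinks the factors involved without changing the answer to the query, which is why the quoted papers treat it as a standard preprocessing step.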
“…A second approach to BN inference is direct computation (DC), which performs inference directly in a BN. The classical DC algorithms are variable elimination (VE) [17,18,19], arc-reversal (AR) [9,13] and symbolic probabilistic inference (SPI) [6,12]. The experimental results provided by Zhang [17] indicate that VE is more efficient than the classical JTP methods when updating twenty or fewer non-evidence variables, given a set of twenty or fewer evidence variables.…”
Section: Introduction
confidence: 99%