1990
DOI: 10.1002/net.3230200510
A randomized approximation algorithm for probabilistic inference on bayesian belief networks

Abstract: Researchers in decision analysis and artificial intelligence (AI) have used Bayesian belief networks to build probabilistic expert systems. Using standard methods drawn from the theory of computational complexity, workers in the field have shown that the problem of probabilistic inference in belief networks is difficult and almost certainly intractable. We have developed a randomized approximation scheme, BN-RAS, for doing probabilistic inference in belief networks. The algorithm can, in many circumstances, pe…

Cited by 50 publications (26 citation statements)
References 16 publications
“…Techniques exploiting other features of graphs will be required in realistic cases. For other recent work in the field, see Beinlich et al., 1989; Chavez, 1989; Chavez and Cooper, 1989a; Chavez and Cooper, 1989b; Lauritzen and Spiegelhalter, 1988. Employing these algorithms may still be hard in practice, even if not hard in principle; we have seen no argument on this score, one way or the other. But Harman's argument proceeds from the claim that belief revision using probabilities is hard in principle, not from the claim that it is hard in practice.…”
Section: Figure (mentioning)
confidence: 77%
“…Another class of sampling methods is based on Markov Chain Monte Carlo (MCMC) simulation [23, 128]. Procedurally, samples in MCMC are generated by first starting with a random sample x_0 that is consistent with evidence e. A sample x_i is then generated based on sample x_{i−1} by choosing a new value of some nonevidence variable X by sampling from the distribution Pr(X | x_i − X).…”
Section: Inference By Stochastic Sampling (mentioning)
confidence: 99%
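The MCMC procedure described in the excerpt above can be sketched in a few lines. The toy network below (a single edge A → B with made-up conditional probabilities, not a network from the paper) has only one nonevidence variable, so the Gibbs-style resampling step reduces to drawing A from Pr(A | B = True) on every iteration; all names and numbers here are illustrative assumptions.

```python
import random

random.seed(0)

# Toy belief network A -> B with illustrative CPTs (not from the paper).
P_A = 0.3                               # Pr(A = True)
P_B_GIVEN_A = {True: 0.9, False: 0.2}   # Pr(B = True | A)

def gibbs_estimate_p_a_given_b(num_samples=20000):
    """Estimate Pr(A = True | B = True) by repeatedly resampling the
    single nonevidence variable A from its conditional distribution,
    keeping the evidence B = True fixed throughout."""
    count_a_true = 0
    for _ in range(num_samples):
        # Pr(A | B = True) is proportional to Pr(A) * Pr(B = True | A).
        w_true = P_A * P_B_GIVEN_A[True]
        w_false = (1 - P_A) * P_B_GIVEN_A[False]
        a = random.random() < w_true / (w_true + w_false)
        count_a_true += a
    return count_a_true / num_samples

# Exact posterior for comparison: 0.3*0.9 / (0.3*0.9 + 0.7*0.2) ≈ 0.6585
print(gibbs_estimate_p_a_given_b())
```

In a larger network the loop would cycle over every nonevidence variable, resampling each from its distribution given the current values of the others (its Markov blanket); the convergence traps discussed in the next excerpt arise when that chain mixes poorly.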
“…In one belief network, we observed during repeated simulations that the MST algorithm got trapped in a portion of the Markov state space and did not converge; in [6] we analyze why such traps occur and we offer some suggestions for avoiding traps. We also derived a theoretical analysis of the worst-case expected convergence of the MST algorithm [4], and in [24] we prove a tight worst-case bound. We developed a derivative of MST called BN-RAS, and in [2] we evaluate the convergence of BN-RAS on two belief networks.…”
Section: Approximation Algorithms (mentioning)
confidence: 99%