1996
DOI: 10.1016/0888-613x(96)00013-8
Importance sampling algorithms for the propagation of probabilities in belief networks

Cited by 14 publications (25 citation statements)
References 6 publications
“…First-order reliability method (FORM) (Allen and Camberos, 2009), second-order reliability method (SORM) (Allen and Camberos, 2009), the importance sampling method (Cano et al., 1996), and MCS are a short list of the available methods. Except for MCS, all of these methods approximate either the output variable or its distribution, which inevitably introduces error, especially when the governing system equation is complex.…”
Section: Surrogate Modeling - Cross-validation (mentioning)
confidence: 99%
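The distinction drawn in the statement above, that MCS avoids approximating the output while other methods trade accuracy for error, can be illustrated with a minimal importance-sampling sketch. Everything below is an illustrative assumption, not taken from the cited works: we estimate the rare-event probability P(Z > 3) for a standard normal, comparing plain Monte Carlo against a proposal distribution shifted to the threshold.

```python
import math
import random

def mc_estimate(threshold, n, seed=0):
    """Plain Monte Carlo estimate of P(Z > threshold) for Z ~ N(0,1)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.gauss(0.0, 1.0) > threshold)
    return hits / n

def is_estimate(threshold, n, seed=0):
    """Importance sampling: draw from the shifted proposal N(threshold, 1)
    and reweight each hit by the density ratio N(0,1)/N(threshold,1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(threshold, 1.0)  # sample from the proposal
        if z > threshold:
            # phi(z) / phi(z - threshold) = exp(-z^2/2 + (z-threshold)^2/2)
            total += math.exp(-z * z / 2.0 + (z - threshold) ** 2 / 2.0)
    return total / n
```

For threshold 3 the true value is about 1.35e-3; the shifted proposal places roughly half its samples past the threshold, so the importance-sampling estimate reaches a low relative error with far fewer samples than plain Monte Carlo needs.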
“…The former concerns the exponential number of parameters required to fill the conditional probability table (CPT) of a variable, while the latter concerns the non-polynomial time required to perform belief updating in a BN [8]. Different schemes and approximate/simulation-based algorithms have been proposed to tackle the belief-updating issue [1,7,12-14,29]. For knowledge elicitation, schemes such as Noisy-OR [27,28] and the CAST logic [6,29] have been proposed that elicit a linear number of parameters from a subject-matter expert and convert them into conditional probability tables (which have an exponential number of parameters).…”
Section: Introduction (mentioning)
confidence: 99%
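The linear-to-exponential conversion mentioned in the statement above can be sketched for Noisy-OR: one elicited link probability per parent (plus an optional leak term) determines a full CPT with one row per parent configuration. The function name and numbers below are hypothetical, for illustration only.

```python
from itertools import product

def noisy_or_cpt(link_probs, leak=0.0):
    """Expand Noisy-OR parameters into a full CPT.

    link_probs: one probability per parent that this parent alone
    causes the child. Returns a dict mapping each tuple of parent
    states (0/1) to P(child = 1 | parents)."""
    cpt = {}
    for states in product((0, 1), repeat=len(link_probs)):
        # The child stays off only if the leak fails and every active
        # parent's influence is inhibited independently.
        p_off = 1.0 - leak
        for s, p in zip(states, link_probs):
            if s:
                p_off *= 1.0 - p
        cpt[states] = 1.0 - p_off
    return cpt

# 3 elicited parameters expand into a 2^3-row table
cpt = noisy_or_cpt([0.9, 0.8, 0.5])
```

With n parents, the expert supplies n numbers (n + 1 with a leak) while the resulting table has 2^n rows, which is exactly the saving the quoted passage describes.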
“…Known importance sampling algorithms, such as those developed by Cano et al. (1996), Fung and Chang (1990), Shachter and Peot (1990), and Dagum and Luby (1997), generate configurations by simulating a value for each non-observed variable from its conditional distribution and instantiating each observed variable in X_E to the evidence e. This can perform badly when most of the weights are low and only a few are high.…”
Section: Computing a Sampling Distribution (mentioning)
confidence: 99%
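The sampling scheme described in the statement above, simulating each non-observed variable from its conditional distribution while clamping observed variables to the evidence, is essentially likelihood weighting. A minimal sketch on a hypothetical two-node network A -> B with evidence B = 1; all probabilities are made-up illustrative values, not taken from the cited papers.

```python
import random

P_A = 0.01                       # assumed prior P(A = 1)
P_B_GIVEN_A = {1: 0.9, 0: 0.1}   # assumed likelihood P(B = 1 | A)

def likelihood_weighting(n, seed=0):
    """Estimate P(A = 1 | B = 1): sample A from its prior, clamp B to
    the evidence, and weight each sample by P(evidence | sampled A)."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        a = 1 if rng.random() < P_A else 0   # simulate non-observed variable
        w = P_B_GIVEN_A[a]                   # weight = likelihood of evidence
        den += w
        if a == 1:
            num += w
    return num / den

# Exact posterior: 0.01*0.9 / (0.01*0.9 + 0.99*0.1) ≈ 0.0833
```

This toy example also shows the failure mode the quotation mentions in mild form: because the evidence is unlikely under the prior, about 99% of samples carry the small weight 0.1 and only a rare few carry the large weight 0.9, so the effective sample size is much smaller than n.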
“…If we want to calculate the 'a posteriori' probability for every variable in the network, then for each case of each variable a sampling distribution has to be computed and a sample drawn from it. This is the solution adopted in Cano et al. (1996). Dagum and Luby (1997) use a slightly different method: they estimate p(x_k, e) and p(e) with different samples in order to obtain an approximation of p(x_k|e).…”
Section: Importance Sampling in Bayesian Networks (mentioning)
confidence: 99%
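The ratio approach attributed to Dagum and Luby in the statement above, estimating p(x_k, e) and p(e) from different samples and dividing, can be sketched on a hypothetical two-node network A -> B with evidence B = 1. All numbers and names are illustrative assumptions.

```python
import random

P_A = 0.3                        # assumed prior P(A = 1)
P_B_GIVEN_A = {1: 0.8, 0: 0.2}   # assumed likelihood P(B = 1 | A)

def estimate(event, n, rng):
    """Forward-sample (A, B) and average the indicator of `event`."""
    hits = 0
    for _ in range(n):
        a = 1 if rng.random() < P_A else 0
        b = 1 if rng.random() < P_B_GIVEN_A[a] else 0
        if event(a, b):
            hits += 1
    return hits / n

def posterior_ratio(n, seed=0):
    """Approximate p(A=1 | B=1) as the ratio of two estimates
    computed from separate samples, as in the scheme described above."""
    rng = random.Random(seed)
    p_joint = estimate(lambda a, b: a == 1 and b == 1, n, rng)  # p(x_k, e)
    p_ev = estimate(lambda a, b: b == 1, n, rng)                # p(e)
    return p_joint / p_ev

# Exact posterior: 0.3*0.8 / (0.3*0.8 + 0.7*0.2) = 0.24/0.38 ≈ 0.6316
```

Using separate samples for numerator and denominator keeps the two estimates independent, at the cost of running the sampler twice; the per-variable sampling-distribution approach attributed to Cano et al. avoids the second run but must recompute a distribution for each case of each variable.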