2016
DOI: 10.1214/15-ba981
A New Family of Non-Local Priors for Chain Event Graph Model Selection

Cited by 21 publications (20 citation statements). References 29 publications.
“…See also Consonni et al. () for strategies to set moment prior parameters when comparing binomial probabilities and Collazo and Smith () for their use in chain event graphs.…”
Section: Prior Formulation and Parsimony Properties
confidence: 99%
“…In equation (2.6), g determines the prior separation in the binomial success probabilities across components and the prior informativeness. As discussed in Section 2.3, large g leads to informative priors with little separation across components, and there is a range of g-values that can be interpreted as being minimally informative in a fairly robust manner across k. See also Consonni et al. (2013) for strategies to set moment prior parameters when comparing binomial probabilities and Collazo and Smith (2016) for their use in chain event graphs. An issue in expressions (2.4) and (2.6) is the computation of the normalizing constant C_k: a non-trivial expectation of a product of quadratic forms.…”
Section: Choice of Penalty Function
confidence: 99%
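The excerpt above notes that a moment prior's normalizing constant is an expectation under the base prior. As an illustrative simplification (not the paper's C_k), for a single binomial probability a second-order moment prior π(p) ∝ (p − p0)² Beta(p; a, b) has constant C = E[(p − p0)²] under the Beta base prior, which can be estimated by Monte Carlo; the values p0, a, b below are hypothetical examples.

```python
import random

def moment_prior_constant(p0, a, b, n_draws=200_000, seed=0):
    # Monte Carlo estimate of C = E[(p - p0)^2] under Beta(a, b),
    # the normalizing constant of pi(p) ∝ (p - p0)^2 * Beta(p; a, b).
    rng = random.Random(seed)
    total = sum((rng.betavariate(a, b) - p0) ** 2 for _ in range(n_draws))
    return total / n_draws

# For Beta(2, 2) centred at p0 = 0.5, C equals Var(p) = 1/20 = 0.05 exactly,
# so the estimate should land very close to 0.05.
C = moment_prior_constant(p0=0.5, a=2.0, b=2.0)
```

In the chain event graph setting the corresponding constant involves a product of such quadratic forms across components, which is why the excerpt calls its computation non-trivial; the one-dimensional case above only illustrates the expectation being taken.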
“…Another possible research stream is to explore causal and explanatory analyses using graphical models such as Bayesian Networks (Pearl, 2009; Schenekenberg et al, 2011) and Chain Event Graphs (Smith & Anderson, 2008; Collazo & Smith, 2015). Finally, in a future study it will also be very interesting to examine the impact of different layers of hidden neurons defined for the ANN algorithm on the results.…”
Section: Results
confidence: 99%
“…The first component on the right-hand side of the equation can be evaluated by applying the results in Equation (19), and the second component can be simplified to Equation (32). By doing this, we have the expression in Equation (31).…”
Section: Composite Singular and Stochastic Manipulations Under Routine Intervention
confidence: 99%
“…Now, we run the algorithm for α = 0.001, α = 0.01, α = 0.1, α = 1, α = 3, α = 5, where α is the prior parameter representing the number of phantom units entering the root node. We assess the resulting models in terms of situational errors [31] and MAP scores. The situational error for a situation v measures the Euclidean distance between the true conditional probabilities θ*_v and the mean posterior probabilities θ̄_v estimated on the best-scoring model; the total situational error of a tree is evaluated as γ(T) = Σ_{v ∈ V_T} ||θ*_v − θ̄_v||².…”
Section: A Comparison Study
confidence: 99%
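The situational-error metric quoted above can be sketched directly from its definition: a per-situation Euclidean distance between true and posterior-mean conditional probability vectors, with the tree total summing the squared distances. The dictionary-keyed representation and the situation labels below are assumptions for illustration, not the cited implementation.

```python
import math

def situational_error(theta_true, theta_est):
    # Euclidean distance ||theta*_v - theta_bar_v|| between the true
    # conditional probability vector and the posterior-mean vector
    # at one situation v.
    return math.sqrt(sum((t - e) ** 2 for t, e in zip(theta_true, theta_est)))

def total_situational_error(tree_true, tree_est):
    # gamma(T) = sum over situations v of ||theta*_v - theta_bar_v||^2,
    # as in the quoted definition.
    return sum(situational_error(tree_true[v], tree_est[v]) ** 2
               for v in tree_true)

# Toy example: a tree with two situations and their outcome probabilities.
true_probs = {"v1": [0.7, 0.3], "v2": [0.2, 0.5, 0.3]}
est_probs = {"v1": [0.6, 0.4], "v2": [0.25, 0.45, 0.3]}
gamma = total_situational_error(true_probs, est_probs)  # 0.02 + 0.005 = 0.025
```

A lower γ(T) indicates that the posterior means on the best-scoring model sit closer to the data-generating probabilities, which is how the comparison study ranks the different α settings.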