2017
DOI: 10.1080/03610926.2017.1342839
Estimation and influence diagnostics for zero-inflated hyper-Poisson regression model: full Bayesian analysis

Abstract: The purpose of this paper is to develop a Bayesian analysis for the zero-inflated hyper-Poisson model. Markov chain Monte Carlo methods are used to develop a Bayesian procedure for the model, and the Bayes estimators are compared by simulation with the maximum-likelihood estimators. Regression modeling and model selection are also discussed, and case-deletion influence diagnostics are developed for the joint posterior distribution based on the functional Bregman divergence, which includes ψ-divergence and several…

Cited by 3 publications (7 citation statements)
References 30 publications
“…Let ω^(1), …, ω^(Q) be a sample of size Q from π(ω | 𝒟). Then a Monte Carlo estimate of K(P, P_(−i)) is given by … In particular, from equation (17), a Monte Carlo estimate of the K–L divergence D_{K–L}(P, P_(−i)) is obtained as … Values of D_ζ(P, P_(−i)) can be interpreted as the ζ-divergence of the effect of deleting the i-th case from the full data on the joint posterior distribution of ω. As pointed out by Peng and Dey,34 Weiss,35 and Cancho et al.,17 it may be difficult for a practitioner to judge the cutoff point of the divergence measure to determine whether a small subset of observations from the full data is influential or not. Thus, we will use the calibration proposal given by Peng and Dey…”
Section: Appendix 1. Rstan Code for the Johnson SB Regression Model
confidence: 99%
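The excerpt above describes estimating the K–L divergence between the full-data posterior P and the case-deleted posterior P_(−i) from posterior draws. A minimal sketch of that Monte Carlo estimator, using the standard identity that the estimate depends only on the per-draw log-likelihood contributions of observation i (the function name and the simulated draws are illustrative, not from the paper):

```python
import numpy as np

def kl_case_deletion(loglik_i):
    """Monte Carlo estimate of D_KL(P, P_(-i)) from posterior draws.

    loglik_i : array of shape (Q,), where loglik_i[q] = log f(y_i | omega^(q))
               is the log-likelihood contribution of the i-th observation
               under the q-th posterior draw.

    The estimate is log{(1/Q) * sum_q 1/f(y_i|omega^(q))}
                    + (1/Q) * sum_q log f(y_i|omega^(q)),
    i.e. the log of the inverse-CPO estimate plus the mean log-likelihood.
    """
    loglik_i = np.asarray(loglik_i, dtype=float)
    Q = loglik_i.shape[0]
    # log E[1/f(y_i|omega)] computed via log-sum-exp for numerical stability
    log_inv_cpo = np.logaddexp.reduce(-loglik_i) - np.log(Q)
    return log_inv_cpo + loglik_i.mean()
```

By Jensen's inequality the estimate is nonnegative for any sample of draws; large values flag the i-th case as influential relative to the calibration cutoff discussed in the excerpt.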
“…In this context, although not required for the development, we assume that the priors of α, β and τ are independent, that is, π(α, β, τ) = π(α)π(β)π(τ), with α ∼ π(α), β ∼ N_p(0, Σ_β) and γ ∼ N_q(0, Σ_γ), where N_r(0, Σ_0) denotes the r-variate normal distribution with mean vector 0 and covariance matrix Σ_0. Following some authors (see, for example, Cancho et al.17), all the hyper-parameters here are specified so as to express weakly informative priors. The hyper-parameter associated with the variance of …”
Section: Bayesian Inference
confidence: 99%
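The prior structure quoted above factorizes as π(α)π(β)π(γ) with independent multivariate-normal components for the regression coefficients. A minimal sketch of drawing from such a prior, assuming diagonal covariances Σ_β = sd²I and Σ_γ = sd²I and a Gamma(0.01, 0.01) prior for a positive parameter α (both the Gamma choice and the diagonal covariances are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def draw_prior(p, q, sd_beta=10.0, sd_gamma=10.0, rng=None):
    """Draw (alpha, beta, gamma) from the independent prior
    pi(alpha, beta, gamma) = pi(alpha) pi(beta) pi(gamma).

    beta  ~ N_p(0, sd_beta^2 * I)   (weakly informative: large sd)
    gamma ~ N_q(0, sd_gamma^2 * I)
    alpha ~ Gamma(0.01, rate=0.01)  (hypothetical prior for a
                                     positive dispersion parameter)
    """
    rng = np.random.default_rng() if rng is None else rng
    alpha = rng.gamma(shape=0.01, scale=1.0 / 0.01)  # rate 0.01 -> scale 100
    beta = rng.normal(0.0, sd_beta, size=p)
    gamma = rng.normal(0.0, sd_gamma, size=q)
    return alpha, beta, gamma
```

Large prior standard deviations on β and γ keep the prior weakly informative, so the posterior is dominated by the likelihood, which matches the intent described in the excerpt.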