2018
DOI: 10.1016/j.jmva.2018.03.012
Trace-class Monte Carlo Markov chains for Bayesian multivariate linear regression with non-Gaussian errors

Abstract: Let π denote the intractable posterior density that results when the likelihood from a multivariate linear regression model with errors from a scale mixture of normals is combined with the standard non-informative prior. There is a simple data augmentation algorithm (based on latent data from the mixing density) that can be used to explore π. Let h(·) and d denote the mixing density and the dimension of the regression model, respectively. Hobert et al. (2016) have recently shown that, if h converges to 0 at t…
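The data augmentation (DA) algorithm mentioned in the abstract alternates between drawing the latent mixing variables and the regression parameters. As a minimal sketch, the univariate case (d = 1) with Student-t errors is illustrative: t errors arise as a scale mixture of normals with a gamma mixing density, and all full conditionals are standard. The function name, the choice of t errors, and the fixed degrees of freedom below are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def da_gibbs_t_regression(y, X, nu=5.0, n_iter=2000, seed=0):
    """DA / Gibbs sampler for linear regression with Student-t errors,
    written as a scale mixture of normals:
        eps_i | lam_i ~ N(0, sigma^2 / lam_i),  lam_i ~ Gamma(nu/2, rate=nu/2),
    under the improper prior pi(beta, sigma^2) proportional to 1/sigma^2.
    (Illustrative sketch; not the paper's exact setup.)
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    sigma2 = 1.0
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # I-step: draw latent mixing variables lam_i from their gamma conditional
        r = y - X @ beta
        lam = rng.gamma(shape=(nu + 1.0) / 2.0,
                        scale=2.0 / (nu + r**2 / sigma2))
        # P-step, part 1: beta | lam, sigma2 is a weighted-least-squares normal
        XtLX = X.T @ (lam[:, None] * X)
        cov = np.linalg.inv(XtLX)
        mean = cov @ (X.T @ (lam * y))
        beta = rng.multivariate_normal(mean, sigma2 * cov)
        # P-step, part 2: sigma2 | beta, lam is inverse-gamma
        r = y - X @ beta
        sigma2 = 1.0 / rng.gamma(shape=n / 2.0,
                                 scale=2.0 / np.sum(lam * r**2))
        draws[t] = beta
    return draws
```

On synthetic data with t errors, the post-burn-in draws concentrate around the true coefficients; trace-class results of the kind discussed on this page concern the spectrum of the Markov operator underlying exactly this kind of two-step chain.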

Cited by 7 publications (6 citation statements)
References 19 publications
“…Also, Jung & Hobert () showed that, when d = 1 and the mixing density is inverted Gamma with shape parameter larger than 1/2, the Markov operator associated with the DA Markov chain is a trace-class operator, which implies that the corresponding chain converges at a geometric rate. (Qin & Hobert () provide a substantial generalization of the results in Jung & Hobert ().)…”
Section: Introduction
confidence: 58%
“…While compact operators were once thought to be rare in MCMC problems with uncountable state spaces (Chan and Geyer, 1994), a string of recent results suggests that trace-class DA Markov operators are not at all rare (see e.g. Qin and Hobert, 2018; Chakraborty and Khare, 2017; Choi and Román, 2017; Pal et al., 2017). Furthermore, by exploiting a simple trick, we are able to broaden the applicability of our method well beyond DA algorithms.…”
Section: Introduction
confidence: 92%
“…Moreover, when h(·) is a standard pdf on R+, these univariate densities are often members of a standard parametric family. The following proposition about the resulting DA operator is proved in Qin and Hobert (2018).…”
Section: Bayesian Linear Regression Model with Non-Gaussian Errors
confidence: 99%
“…This is particularly true for the so-called Data Augmentation (DA) algorithm, which is a widely used technique for constructing Markov chains by introducing unobserved/latent random variables. In this context, often, (a) the transition density can only be expressed as an intractable high-dimensional integral, and/or (b) the stationary density is only available up to an unknown normalizing constant; see Albert and Chib (1993); Roy (2012); Polson, Scott, and Windle (2013); Choi and Hobert (2013); Hobert et al. (2015); Qin and Hobert (2016); Pal et al. (2017), to name just a few.…”
Section: Introduction
confidence: 99%