2018
DOI: 10.1111/sjos.12310

Convergence Analysis of MCMC Algorithms for Bayesian Multivariate Linear Regression with Non‐Gaussian Errors

Abstract: When Gaussian errors are inappropriate in a multivariate linear regression setting, it is often assumed that the errors are iid from a distribution that is a scale mixture of multivariate normals. Combining this robust regression model with a default prior on the unknown parameters results in a highly intractable posterior density. Fortunately, there is a simple data augmentation (DA) algorithm and a corresponding Haar PX-DA algorithm that can be used to explore this posterior. This paper provides conditions (…)
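The error model in the abstract — iid errors drawn from a scale mixture of multivariate normals — can be sketched as follows. This is an illustrative reconstruction, not code from the paper: with a Gamma(ν/2, rate ν/2) mixing variable, the resulting errors are multivariate Student-t with ν degrees of freedom, i.e. heavier-tailed than Gaussian.

```python
import numpy as np

def scale_mixture_errors(n, cov, nu, rng):
    """Draw n error vectors e_i = z_i / sqrt(w_i), where z_i ~ N(0, cov)
    and w_i ~ Gamma(nu/2, rate nu/2).  Marginally, e_i is multivariate
    Student-t with nu degrees of freedom and scale matrix cov."""
    p = cov.shape[0]
    z = rng.multivariate_normal(np.zeros(p), cov, size=n)
    w = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)  # rate nu/2
    return z / np.sqrt(w)[:, None]

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.3], [0.3, 1.0]])
errs = scale_mixture_errors(100_000, cov, nu=5.0, rng=rng)

# Excess kurtosis of one coordinate: 6/(nu - 4) = 6 for t_5, vs. 0 for Gaussian.
x = errs[:, 0]
m2 = np.mean((x - x.mean()) ** 2)
m4 = np.mean((x - x.mean()) ** 4)
excess_kurtosis = m4 / m2**2 - 3.0
```

The Gamma mixing choice here is only one instance; the paper's framework allows any mixing density on the positive half-line.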

Cited by 8 publications (14 citation statements) · References 31 publications
“…We shall refer to h as a mixing density. Heavy‐tailed error densities can be produced by choosing h with appropriate behavior near the origin. Some typical choices for h are the gamma, inverse gamma, generalized inverse Gaussian, and log‐normal densities.…”
Section: Introduction
confidence: 99%
“…Heavy-tailed error densities can be produced by choosing h with appropriate behavior near the origin [1–3]. Some typical choices for h are the gamma, inverse gamma, generalized inverse Gaussian, and log-normal densities. However, in principle, h can be taken to be any density on the positive half-line.…”
Section: Introduction
confidence: 99%
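The quoted passage notes that h may be any density on the positive half-line. A minimal sketch of that flexibility (function names and parameter values are assumed for illustration, not taken from the paper) passes an arbitrary sampler for h into the scale-mixture construction:

```python
import numpy as np

rng = np.random.default_rng(1)

def errors_with_mixing(h_sampler, n, rng):
    """Univariate scale-mixture errors e_i = z_i / sqrt(w_i), where the
    mixing density h is supplied as a sampler on (0, infinity)."""
    w = h_sampler(n)
    return rng.standard_normal(n) / np.sqrt(w)

# Three of the "typical choices" from the quote (parameters are illustrative):
gamma_h     = lambda n: rng.gamma(2.0, 0.5, size=n)        # gamma mixing
inv_gamma_h = lambda n: 1.0 / rng.gamma(2.0, 0.5, size=n)  # inverse gamma mixing
lognormal_h = lambda n: rng.lognormal(0.0, 1.0, size=n)    # log-normal mixing

samples = {name: errors_with_mixing(h, 50_000, rng)
           for name, h in [("gamma", gamma_h),
                           ("inv_gamma", inv_gamma_h),
                           ("lognormal", lognormal_h)]}
```

Each choice of h produces a different symmetric, heavier-than-Gaussian error distribution, which is what makes this family useful for robust regression.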
“…This is particularly true for the so-called Data Augmentation (DA) algorithm, which is a widely used technique for constructing Markov chains by introducing unobserved/latent random variables. In this context, often, (a) the transition density can only be expressed as an intractable high-dimensional integral, and/or (b) the stationary density is only available up to an unknown normalizing constant; see Albert and Chib (1993); Roy (2012); Polson, Scott, and Windle (2013); Choi and Hobert (2013); Hobert et al (2015); Qin and Hobert (2016); Pal et al (2017) to name just a few.…”
Section: Introduction
confidence: 99%
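As a concrete instance of the two-step DA recipe described in the quote, here is a hedged sketch (model, names, and parameters are assumed for illustration, not from the paper) for a Student-t location model: conditional on latent scales w_i the errors are Gaussian, so the chain alternates between drawing w | μ, y and μ | w, y, both in closed form.

```python
import numpy as np

def da_student_t_location(y, nu, n_iter, rng):
    """Two-step DA (Gibbs) chain targeting the posterior of mu under
    y_i = mu + e_i, e_i ~ t_nu (unit scale), with a flat prior on mu.
    Latent variables: y_i | w_i, mu ~ N(mu, 1/w_i), w_i ~ Gamma(nu/2, rate nu/2)."""
    mu = np.median(y)                       # arbitrary starting value
    draws = np.empty(n_iter)
    for t in range(n_iter):
        # Step 1: w_i | mu, y ~ Gamma((nu+1)/2, rate (nu + (y_i - mu)^2)/2)
        w = rng.gamma((nu + 1.0) / 2.0, 2.0 / (nu + (y - mu) ** 2))
        # Step 2: mu | w, y ~ N(sum(w_i y_i)/sum(w_i), 1/sum(w_i))
        mu = rng.normal((w * y).sum() / w.sum(), 1.0 / np.sqrt(w.sum()))
        draws[t] = mu
    return draws

rng = np.random.default_rng(2)
y = 2.0 + rng.standard_t(df=4, size=300)    # synthetic data centered at 2
draws = da_student_t_location(y, nu=4.0, n_iter=2000, rng=rng)
posterior_mean = draws[500:].mean()         # discard burn-in
```

Note that both conditional draws are cheap and exact, which is the appeal of DA; the convergence-rate questions studied in the cited papers concern how quickly such chains mix.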
“…Remark 1. Conditions (S1) & (S2) are known to be necessary for posterior propriety (Fernández and Steel, 1999; Hobert et al, 2016).…”
Section: Introduction
confidence: 99%
“…There is a well-known data augmentation (DA) algorithm that can be used to explore this intractable density (Liu, 1996). Hobert et al (2016) (hereafter HJK&Q) performed convergence rate analyses of the Markov chains underlying this DA algorithm and an alternative Haar PX-DA algorithm. In this paper, we provide a substantial improvement of HJK&Q's main result.…”
Section: Introduction
confidence: 99%