“…To conduct Bayesian inference for the parameters m ∈ ℝ, φ ∈ (−1, 1) and s² ∈ (0, ∞) we specify commonly used prior distributions (Kastner and Frühwirth-Schnatter 2014; Alexopoulos et al. 2021): m ∼ N(0, 10), (φ + 1)/2 ∼ Beta(20, 1/5) and s² ∼ Gam(1/2, 1/2). The posterior of interest is…”
Section: Simulated Data: a Stochastic Volatility Model
“…To assess the proposed variance reduction methods we simulated daily log-returns of a stock for d days by using values for the parameters of the model that have been previously estimated in real data applications (Kim et al. 1998; Alexopoulos et al. 2021): φ = 0.98, μ = −0.85 and s = 0.15. To draw samples from the d-dimensional, d = N + 3, target posterior in (24) we first transform the parameters φ and s² to real-valued parameters by taking the logit and logarithm transformations, and we assign Gaussian prior distributions by matching the first two moments of the Gaussian distributions with the corresponding moments of the beta and gamma distributions used as priors for the parameters of the original formulation.…”
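The transformation and moment-matching steps described above can be sketched as follows. This is an illustration only: the excerpt does not say on which scale the moments are matched, so here the Gaussian priors simply inherit the mean and variance of Beta(20, 1/5) and Gam(1/2, 1/2) on the original scale:

```python
import numpy as np
from scipy import stats

# Parameter values quoted from the excerpt.
phi, s = 0.98, 0.15

# Map the constrained parameters to the real line:
# phi in (-1, 1): logit((phi + 1)/2) = log((1 + phi) / (1 - phi));
# s^2 > 0: log(s^2).
phi_t = np.log((1.0 + phi) / (1.0 - phi))
s2_t = np.log(s**2)

# First two moments of the original priors, used as Gaussian prior moments.
mb, vb = stats.beta.stats(20.0, 0.2, moments="mv")        # Beta(20, 1/5)
mg, vg = stats.gamma.stats(0.5, scale=2.0, moments="mv")  # Gam(1/2, 1/2)
```

The logit map is invertible, with φ recovered as (exp(φ_t) − 1)/(exp(φ_t) + 1), so no prior mass is lost in the reparameterisation.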
Section: Simulated Data: a Stochastic Volatility Model
We introduce a general framework that constructs reduced-variance estimators for random walk Metropolis and Metropolis-adjusted Langevin algorithms. The resulting estimators require negligible extra computational cost and are derived in a post-processing step that utilises all proposal values of the Metropolis algorithms. Variance reduction is achieved by producing control variates through the approximate solution of the Poisson equation associated with the target density of the Markov chain. The proposed method approximates the target density with a Gaussian and then utilises accurate solutions of the Poisson equation for the Gaussian case. This leads to an estimator that uses two key elements: (1) a control variate from the Poisson equation that contains an intractable expectation under the proposal distribution, and (2) a second control variate to reduce the variance of a Monte Carlo estimate of this intractable expectation. Simulated data examples illustrate the impressive variance reduction achieved in the Gaussian target case and the corresponding effect when the target Gaussianity assumption is violated. Real data examples on Bayesian logistic regression and stochastic volatility models verify that considerable variance reduction is achieved with negligible extra computational cost.
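The post-processing described in the abstract requires every proposed point, accepted or not. A minimal random walk Metropolis sketch that records both the chain and all proposals is shown below; the control-variate construction itself is not reproduced here, and the function name and Gaussian target are illustrative choices, not the paper's implementation:

```python
import numpy as np

def rwm_with_proposals(logpi, x0, n_iter, step, rng):
    """Random walk Metropolis that stores every proposal, so that
    reduced-variance estimators can be built in a post-processing step."""
    x = np.asarray(x0, dtype=float)
    lp = logpi(x)
    chain, proposals = [x.copy()], []
    for _ in range(n_iter):
        y = x + step * rng.standard_normal(x.shape)  # Gaussian random walk
        lq = logpi(y)
        proposals.append(y.copy())                   # keep even if rejected
        if np.log(rng.uniform()) < lq - lp:          # Metropolis accept step
            x, lp = y, lq
        chain.append(x.copy())
    return np.array(chain), np.array(proposals)

# Standard-Gaussian target: the case where the Poisson-equation solution
# is accurate and the variance reduction is reported to be largest.
rng = np.random.default_rng(1)
chain, props = rwm_with_proposals(lambda z: -0.5 * np.sum(z**2),
                                  np.zeros(2), 5000, 0.8, rng)
```

Because the symmetric proposal density cancels in the Metropolis ratio, only the log-target appears in the accept step; the stored `props` array is what a post-hoc control-variate estimator would consume alongside `chain`.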
“…To conduct Bayesian inference for the parameters m ∈ ℝ, φ ∈ (−1, 1) and s² ∈ (0, ∞) we specify commonly used prior distributions (Kastner and Frühwirth-Schnatter, 2014; Alexopoulos et al., 2021): m ∼ N(0, 10), (φ + 1)/2 ∼ Beta(20, 1/5) and…”
Section: Variance Reduction for MALA