2017
DOI: 10.1080/03610918.2016.1152365
Bayesian inference for generalized extreme value distributions via Hamiltonian Monte Carlo

Abstract: In this paper we propose to evaluate and compare Markov chain Monte Carlo (MCMC) methods to estimate the parameters in a generalized extreme value model. We employed the Bayesian approach using traditional Metropolis-Hastings methods, Hamiltonian Monte Carlo (HMC) and Riemann manifold HMC (RMHMC) methods to obtain the approximations to the posterior marginal distributions of interest. Applications to real datasets of maxima illustrate how HMC can be much more computationally efficient than tradition…
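As a point of reference for the comparison the abstract describes, here is a minimal sketch (not the authors' code) of the random-walk Metropolis-Hastings baseline that HMC and RMHMC are measured against, for a GEV posterior. The vague normal priors, proposal scale, starting values, and toy data are illustrative assumptions; note that scipy's genextreme uses shape c = -ξ relative to the usual GEV convention.

```python
# Hedged sketch: random-walk Metropolis-Hastings for a GEV posterior.
# Priors, proposal scale, and starting values are illustrative assumptions.
import numpy as np
from scipy.stats import genextreme

def log_posterior(theta, z):
    mu, log_sigma, xi = theta                  # log-sigma keeps sigma > 0
    ll = genextreme.logpdf(z, c=-xi, loc=mu, scale=np.exp(log_sigma)).sum()
    lp = -0.5 * np.sum(theta ** 2 / 100.0)     # vague independent N(0, 100) priors
    return ll + lp

def metropolis(z, n_iter=20000, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array([z.mean(), np.log(z.std()), 0.1])   # crude starting point
    lp = log_posterior(theta, z)
    draws = np.empty((n_iter, 3))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(3)     # small local random-walk move
        lp_prop = log_posterior(prop, z)
        if np.log(rng.uniform()) < lp_prop - lp:         # Metropolis accept/reject
            theta, lp = prop, lp_prop
        draws[i] = theta
    return draws

z = np.array([4.2, 3.8, 5.1, 4.0, 6.3, 4.7])   # toy block maxima
samples = metropolis(z)
```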

Cited by 10 publications (6 citation statements)
References 7 publications
“…The gradient is obtained through automatic differentiation (Carpenter et al (2015)) in STAN. The HMC sampler has shown good performance in several other cases (Hajian (2007); Pakman and Paninski (2014); Hartmann and Ehlers (2017)). We provide a short introduction to HMC in Appendix B and refer to Neal et al (2011) or Betancourt (2017) for more details.…”
Section: Hamiltonian Monte Carlo (mentioning)
confidence: 98%
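To make concrete what "gradient obtained through automatic differentiation" amounts to in the quoted statement, here is a minimal sketch using JAX as a stand-in for Stan's reverse-mode autodiff. The (μ, log σ, ξ) parameterization, the toy data, and all names are illustrative assumptions, not the citing paper's code.

```python
# Hedged sketch: autodiff gradient of a GEV log-likelihood, as HMC requires.
import jax
import jax.numpy as jnp

def gev_loglik(params, z):
    """GEV log-likelihood (xi != 0 branch) with sigma = exp(log_sigma) > 0."""
    mu, log_sigma, xi = params
    sigma = jnp.exp(log_sigma)
    s = 1.0 + xi * (z - mu) / sigma      # must stay positive on the support
    return jnp.sum(-jnp.log(sigma)
                   - (1.0 + 1.0 / xi) * jnp.log(s)
                   - s ** (-1.0 / xi))

grad_loglik = jax.grad(gev_loglik)       # reverse-mode gradient in params

z = jnp.array([3.2, 4.1, 2.8, 5.0, 3.7])   # toy block maxima
theta = jnp.array([3.0, 0.0, 0.1])          # (mu, log sigma, xi)
print(grad_loglik(theta, z))             # exactly what HMC's leapfrog consumes
```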
“…To sample β, τ², a, z in PGM, we use a block of HMC and Gibbs sampler. The HMC technique requires fewer iterations to explore the parameter space and converges rapidly to the target distribution (Hartmann and Ehlers, 2017). Therefore, we implement the HMC and Gibbs sampler with 5000 iterations and 1000 burn-in.…”
Section: Simulations (mentioning)
confidence: 99%
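A hedged toy sketch of the HMC-within-Gibbs pattern this quote describes. The citing paper's PGM conditionals are not reproduced here; as a stand-in, the model is y ~ N(β, τ²) with a conjugate inverse-gamma draw for τ² (the Gibbs block) and a leapfrog HMC move for β (the HMC block), run for 5000 iterations with a 1000-draw burn-in as quoted. All model and tuning choices are assumptions.

```python
# Hedged sketch: HMC-within-Gibbs on a toy normal/inverse-gamma model.
import numpy as np

rng = np.random.default_rng(42)
y = rng.normal(2.0, 1.5, size=100)
n, a0, b0 = y.size, 2.0, 2.0                      # data size; IG(a0, b0) prior on tau2

def hmc_beta(beta, tau2, eps=0.1, L=10):
    """One HMC transition for beta; potential U = sum((y - beta)^2) / (2 tau2)."""
    U = lambda b: np.sum((y - b) ** 2) / (2.0 * tau2)
    grad_U = lambda b: (n * b - y.sum()) / tau2
    p0 = rng.standard_normal()
    b, p = beta, p0 - 0.5 * eps * grad_U(beta)    # first half momentum step
    for i in range(L):                            # leapfrog trajectory
        b = b + eps * p
        p = p - (eps if i < L - 1 else 0.5 * eps) * grad_U(b)
    # accept/reject on the total energy H = U + p^2 / 2
    if np.log(rng.uniform()) < U(beta) + 0.5 * p0**2 - U(b) - 0.5 * p**2:
        return b
    return beta

beta, tau2, kept = y.mean(), y.var(), []
for it in range(5000):
    # Gibbs block: conjugate inverse-gamma conditional for tau2 | beta, y
    ss = np.sum((y - beta) ** 2)
    tau2 = 1.0 / rng.gamma(a0 + n / 2.0, 1.0 / (b0 + 0.5 * ss))
    # HMC block: gradient-guided update of beta | tau2, y
    beta = hmc_beta(beta, tau2)
    if it >= 1000:                                # discard burn-in
        kept.append((beta, tau2))
```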
“…The GEV distribution combines into a single form all three Extreme Value (EV) distributions: Gumbel (EV-I, ξ = 0), Fréchet (EV-II, ξ > 0), and Weibull (EV-III, ξ < 0). The density function g(z) and cumulative distribution function G(z) of the GEV distribution can be written as [23], [8]:…”
Section: The GEV Distribution (mentioning)
confidence: 99%
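The quoted snippet breaks off before the formulas themselves. For reference (not part of the quote), the standard GEV distribution function and density in the usual location-scale-shape notation (μ, σ, ξ) are:

```latex
% Standard GEV forms (xi != 0), valid on the set {z : 1 + xi (z - mu)/sigma > 0},
% writing t(z) = [1 + xi (z - mu)/sigma]^{-1/xi}:
G(z) = \exp\{-t(z)\}, \qquad
g(z) = \frac{1}{\sigma}\, t(z)^{\,\xi + 1} \exp\{-t(z)\}
% The Gumbel case xi = 0 is the limit, with t(z) = \exp\{-(z - mu)/\sigma\}.
```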
“…Besides the general algorithms of MCMC, namely the Gibbs [19], the Metropolis [20,21] and the Metropolis-Hastings [22], for the purpose of estimating the parameters in GEV distributions, Hamiltonian Monte Carlo (HMC) algorithms are more efficient. In addition, the HMC parameter estimation is relatively robust and much faster [23]. In extreme value analysis with GEV models, avoidance of random-walk behavior is one of the major advantages of HMC [24].…”
Section: Introduction (mentioning)
confidence: 99%
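To make the quoted point about random-walk behavior concrete, here is a small numerical illustration (an assumption-laden sketch, not from the cited works): random-walk Metropolis on a toy N(0, 1) target produces strongly autocorrelated draws, which is exactly what HMC's long gradient-guided trajectories (as in the leapfrog sketch above) are designed to avoid.

```python
# Hedged demo: lag-1 autocorrelation of random-walk Metropolis on N(0, 1).
import numpy as np

rng = np.random.default_rng(0)
log_target = lambda x: -0.5 * x ** 2            # log density of N(0, 1), up to a constant

x, draws = 0.0, []
for _ in range(5000):
    prop = x + 0.3 * rng.standard_normal()      # small local proposal
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        x = prop
    draws.append(x)

d = np.asarray(draws[500:])                     # drop a short burn-in
d = d - d.mean()
lag1 = (d[:-1] * d[1:]).sum() / (d * d).sum()   # lag-1 autocorrelation
print(f"lag-1 autocorrelation: {lag1:.2f}")     # close to 1 => slow random-walk mixing
```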