Abstract: We develop, theoretically as well as numerically, a new method, Normex, for the sum of independent heavy-tailed random variables, to obtain the most accurate evaluation of its entire distribution. Normex provides sharp results, whatever the number of summands and the tail index. It is particularly suited when the Central Limit Theorem (CLT) applies but with slow convergence of the mean and a poor approximation of the tail. Hence, it fills a gap in the literature by giving an appropr…
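As a rough illustration of the gap this abstract describes, namely that the CLT gives a poor tail approximation for heavy-tailed sums, the following Python sketch (with illustrative parameters, not taken from the paper) compares the normal approximation of a high quantile of a sum of i.i.d. Pareto variables against a Monte Carlo estimate:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)
alpha, n, n_sim = 2.5, 50, 200_000   # tail index, summands, simulation size (illustrative)

# Pareto(alpha) on [1, inf): mean and variance exist since alpha > 2
mean = alpha / (alpha - 1)
var = alpha / ((alpha - 1) ** 2 * (alpha - 2))

# Empirical distribution of the aggregate S_n = X_1 + ... + X_n
S = (rng.pareto(alpha, size=(n_sim, n)) + 1.0).sum(axis=1)

# 99.5% quantile: normal (CLT) approximation vs empirical
q_clt = n * mean + (n * var) ** 0.5 * NormalDist().inv_cdf(0.995)
q_emp = float(np.quantile(S, 0.995))
print(f"CLT 99.5% quantile: {q_clt:.1f}  empirical: {q_emp:.1f}")
```

The empirical high quantile sits above the CLT value, which is the slow tail convergence the abstract refers to; Normex is designed to correct precisely this region.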
“…Having an explicit formula (8) for the pdf f_n of the aggregate risk S_n, we can deduce its cdf F_{S_n} by integrating f_n, and any risk measure based on F_{S_n}, e.g. the two standard risk measures VaR and TVaR.…”
Section: Analytical Results
confidence: 99%
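Formula (8) itself is not reproduced in this snippet; as a sketch of the quoted pipeline (explicit pdf, then cdf by integration, then VaR and TVaR from the cdf), the code below uses a standard lognormal density purely as a hypothetical stand-in for f_n:

```python
import numpy as np
from math import exp, log, sqrt, pi

# Hypothetical stand-in for the explicit aggregate pdf f_n of formula (8):
# a standard lognormal density, chosen only for illustration.
def f_n(x):
    return exp(-0.5 * log(x) ** 2) / (x * sqrt(2 * pi)) if x > 0 else 0.0

# cdf F_{S_n} by cumulative trapezoidal integration of the pdf
xs = np.linspace(1e-6, 200.0, 400_000)
fs = np.array([f_n(x) for x in xs])
cdf = np.concatenate([[0.0], np.cumsum((fs[:-1] + fs[1:]) * np.diff(xs) / 2)])

# VaR_p: smallest grid point with F(x) >= p; TVaR_p: conditional mean beyond VaR_p
p = 0.99
i = int(np.searchsorted(cdf, p))
var_p = float(xs[i])
g = xs * fs                                   # integrand x * f_n(x) for the tail mean
num = float(((g[i:-1] + g[i + 1:]) * np.diff(xs[i:])).sum() / 2)
tvar_p = num / (1.0 - float(cdf[i]))
print(f"VaR_0.99 = {var_p:.2f}, TVaR_0.99 = {tvar_p:.2f}")
```

Any other explicit pdf, including the one from formula (8), could be dropped into `f_n` unchanged; only the integration grid would need to cover its support.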
“…when the threshold κ tends to 1), for which Feller's result (see [4]) is available. For sharper and not necessarily asymptotic results, computations could be done using the Normex method (see [8]).…”
Section: Independent Pareto Rv's Case With Asymptotic Threshold (κ → 1)
We propose a new approach to analyse the effect of diversification on a portfolio of risks. By means of mixing techniques, we provide an explicit formula for the probability density function of the portfolio. These techniques allow us to compute risk measures such as VaR or TVaR analytically, and consequently the associated diversification benefit. The explicit formulas are ideal tools for analysing the properties of risk measures and the diversification benefit. We use standard models that are popular in the reinsurance industry: Archimedean survival copulas and heavy-tailed marginals. We explore their behaviour numerically and compare them to the aggregation of independent random variables, as well as of linearly dependent ones. Moreover, the numerical convergence of Monte Carlo simulations of various quantities is tested against the analytical results. The speed of convergence appears to depend on the fatness of the tail: the higher the tail index, the faster the convergence.
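To mirror the Monte Carlo experiments described above in the simplest, independent case, here is a minimal Python sketch (parameters are illustrative, not the paper's): it estimates the portfolio VaR of n i.i.d. Pareto risks by simulation and the resulting diversification benefit relative to the comonotonic benchmark n · VaR(X_1), which is available in closed form for a Pareto marginal.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n, p = 2.0, 10, 0.99          # tail index, portfolio size, confidence level
n_sim = 500_000

# One-risk VaR is analytic for a Pareto(alpha) on [1, inf):
# P(X > x) = x^(-alpha)  =>  VaR_p(X) = (1 - p)^(-1/alpha)
var_single = (1 - p) ** (-1 / alpha)

# Portfolio VaR by Monte Carlo for independent risks
S = (rng.pareto(alpha, size=(n_sim, n)) + 1.0).sum(axis=1)
var_portfolio = float(np.quantile(S, p))

# Diversification benefit: relative saving vs the comonotonic (fully dependent) case
benefit = 1.0 - var_portfolio / (n * var_single)
print(f"VaR_p(S_n) = {var_portfolio:.2f}, diversification benefit = {benefit:.1%}")
```

Rerunning this with different seeds and sample sizes gives a feel for the convergence behaviour the abstract tests against the analytical result.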
“…It is good news since, in practice, typically only the marginal loss distribution functions are known or statistically estimated, while the dependence structure between the losses is either completely or partially unknown.” In Kratz [44], a new approach, called Normex, is developed to provide accurate estimates of high quantiles for aggregated independent heavy-tailed risks. This method depends only weakly on the sample size and gives good results for any non-negative tail index of the risks.…”
Section: When Is Value-at-risk Subadditive?
Expected shortfall (ES) has been widely accepted as a risk measure that is conceptually superior to value-at-risk (VaR). At the same time, however, it has been criticized for issues relating to backtesting. In particular, ES has been found not to be elicitable, which means that backtesting for ES is less straightforward than, for example, backtesting for VaR. Expectiles have been suggested as potentially better alternatives to both ES and VaR. In this paper, we revisit the commonly accepted desirable properties of risk measures such as coherence, comonotonic additivity, robustness and elicitability. We check VaR, ES and expectiles with regard to whether or not they enjoy these properties, with particular emphasis on expectiles. We also consider their impact on capital allocation, an important issue in risk management. We find that, despite the caveats that apply to the estimation and backtesting of ES, it can be considered a good risk measure. As a consequence, there is no sufficient evidence to justify an all-inclusive replacement of ES by expectiles in applications. For backtesting ES, we propose an empirical approach that consists of replacing ES by a set of four quantiles, which should allow us to make use of backtesting methods for VaR.
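The quantities this abstract compares can be sketched numerically. The code below (a hedged illustration using a Student-t sample as an arbitrary heavy-tailed loss distribution, not the paper's data) computes VaR, ES, an expectile, and a four-quantile proxy for ES in the spirit of the backtesting proposal, here implemented as a left-endpoint rule on the integral representation ES_a = (1/(1-a)) ∫_a^1 VaR_u du:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_t(df=4, size=200_000)   # heavy-tailed loss sample (illustrative)
a = 0.975                                # confidence level

# VaR and ES (losses in the right tail)
var = float(np.quantile(x, a))
es = float(x[x > var].mean())

# Four-quantile approximation of ES: average of VaR at nodes a + k(1-a)/4, k = 0..3
nodes = a + (1 - a) * np.arange(4) / 4
es_four = float(np.quantile(x, nodes).mean())

# Expectile e_tau: root of tau*E[(X-e)+] = (1-tau)*E[(e-X)+], found by bisection
def expectile(x, tau, tol=1e-8):
    lo, hi = float(x.min()), float(x.max())
    while hi - lo > tol:
        e = 0.5 * (lo + hi)
        g = tau * np.mean(np.maximum(x - e, 0)) - (1 - tau) * np.mean(np.maximum(e - x, 0))
        if g > 0:       # identity still positive: the expectile lies to the right
            lo = e
        else:
            hi = e
    return 0.5 * (lo + hi)

print(f"VaR={var:.3f}  ES={es:.3f}  ES(4 quantiles)={es_four:.3f}  expectile={expectile(x, a):.3f}")
```

The four-quantile average replaces one tail expectation by four plain quantiles, which is what makes standard VaR backtesting machinery applicable to it.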
“…Several hybrid models have been proposed in such context, combining two or more densities (see e.g. [1,7,21,30,32,36,38,39,44,60]).…”
Section: Introduction
confidence: 99%
“…How many components of the hybrid model to consider and how to choose them? Since we are interested in fitting the whole distribution underlying heavy tailed data, the idea is to consider both the mean and tail behaviors, and to use limit theorems for each one (as suggested and developed analytically in [32]), in order to make the model as general as possible. Therefore, we introduce a Gaussian distribution for the mean behavior, justified by the Central Limit Theorem (CLT), and a GPD for the tail, as the Pickands theorem (see [48]) tells us that the tail of the distribution may be evaluated through a GPD above a high threshold.…”
Modelling non-homogeneous and multi-component data is a problem that challenges scientific researchers in several fields. In general, it is not possible to find a simple and closed form probabilistic model to describe such data. That is why one often resorts to non-parametric approaches. However, when the multiple components are separable, parametric modelling becomes again tractable. In this study, we propose a self-calibrating method to model multi-component data that exhibit heavy tails. We introduce a three-component hybrid distribution: a Gaussian distribution is linked to a Generalized Pareto one via an exponential distribution that bridges the gap between mean and tail behaviors. An unsupervised algorithm is then developed for estimating the parameters of this model. We study analytically and numerically its convergence. The effectiveness of the self-calibrating method is tested on simulated data, before applying it to real data from neuroscience and finance, respectively. A comparison with other standard Extreme Value Theory approaches confirms the relevance and the practical advantage of this new method.
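The three-component idea can be sketched as a piecewise density with continuity enforced at the two junction points: a Gaussian bulk, an exponential bridge, and a GPD tail. All parameter values below are hypothetical placeholders, not output of the paper's self-calibrating algorithm:

```python
import numpy as np
from math import exp, sqrt, pi

# Hypothetical parameters (placeholders, not fitted by the paper's algorithm):
mu, sigma = 0.0, 1.0        # Gaussian bulk
u1, u2 = 1.0, 2.5           # junctions: bulk | exponential bridge | GPD tail
lam = 1.2                   # rate of the exponential bridge
xi, beta = 0.4, 1.0         # GPD shape and scale above u2

def phi(x):                  # Gaussian density for the bulk
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def bridge(x):               # exponential bridge, scaled to match phi at u1
    return phi(u1) * exp(-lam * (x - u1))

def gpd(x):                  # GPD density above u2
    return (1 / beta) * (1 + xi * (x - u2) / beta) ** (-1 / xi - 1)

c2 = bridge(u2) / gpd(u2)    # scale the tail to match the bridge at u2

def f_raw(x):                # unnormalized piecewise density, continuous by construction
    if x <= u1:
        return phi(x)
    if x <= u2:
        return bridge(x)
    return c2 * gpd(x)

# Normalize numerically (trapezoid rule on a grid covering the bulk and most of the tail)
xs = np.linspace(-8.0, 60.0, 200_001)
ys = np.array([f_raw(x) for x in xs])
Z = float(((ys[:-1] + ys[1:]) * np.diff(xs)).sum() / 2)
f = lambda x: f_raw(x) / Z
print(f"normalizing constant Z = {Z:.4f}")
```

In the paper the junctions, weights and component parameters are estimated by the unsupervised algorithm; here they are fixed by hand only to show how the exponential bridge makes the density continuous from the Gaussian bulk into the GPD tail.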