This paper is methodological in nature and deals with the foundations of Risk Assessment. Several international guidelines have recently recommended selecting appropriate/relevant Hazard Scenarios in order to tame the consequences of (extreme) natural phenomena. In particular, the scenarios should be multivariate, i.e., they should take into account the fact that several variables, generally not independent, may be of interest. In this work, it is shown how a Hazard Scenario can be identified in terms of (i) a specific geometry and (ii) a suitable probability level. Several scenarios, as well as a Structural approach, are presented, and due comparisons are carried out. In addition, it is shown how the Hazard Scenario approach illustrated here is well suited to cope with the notion of Failure Probability, a tool traditionally used for design and risk assessment in engineering practice. All the results outlined throughout the work are based on Copula Theory, which turns out to be a fundamental theoretical apparatus for multivariate risk assessment: formulas for the calculation of the probability of Hazard Scenarios in the general multidimensional case (d ≥ 2) are derived, and worthy analytical relationships among the probabilities of occurrence of Hazard Scenarios are presented. In addition, the Extreme Value and Archimedean special cases are dealt with, relationships between dependence ordering and scenario levels are studied, and a counter-example concerning Tail Dependence is shown. Suitable indications for the practical application of the techniques outlined in the work are given, and two case studies illustrate the procedures discussed in the paper.
Recent financial disasters have emphasized the need to accurately predict extreme financial losses and their consequences for the institutions belonging to a given financial market. The ability of econometric models to predict extreme events relies strongly on their flexibility in accounting for the highly nonlinear and asymmetric dependence patterns observed in financial time series. In this paper, we develop a new class of flexible copula models in which the dependence parameters evolve according to a Markov-switching generalized autoregressive score (GAS) dynamics. Maximum likelihood estimation is performed using a two-step procedure in which the second step relies on the expectation-maximization algorithm. The proposed switching GAS copula models are then used to estimate the conditional value at risk and the conditional expected shortfall, measuring the impact on an institution of extreme events affecting another institution or the market. The empirical investigation, conducted on a panel of European regional portfolios, reveals that the proposed model is able to explain and predict the evolution of systemic risk contributions over the period 1999-2015.

Recent financial disasters have emphasized the need to accurately predict extreme financial losses and their consequences for institutions' financial health and, more generally, for the safety of the broader economy. As a consequence of the ever-increasing level of interconnection between economies, markets, and institutions, recent financial crises feature similar ingredients. Indeed, the global financial crisis of 2007-2008 and the European sovereign debt crisis of 2010-2011 were both characterized by the spread of financial turmoil from the banking sector to the whole economy, leading to sharp economic downturns and recessions.
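As a rough sketch of the GAS mechanics mentioned above (not the paper's full Markov-switching specification), the generic GAS(1,1) recursion updates a time-varying parameter with the scaled score of the conditional likelihood. Here the score sequence is a toy placeholder, and the `tanh` link is one common way to keep a copula correlation inside (-1, 1); all names and parameter values are illustrative:

```python
import math

def gas_path(scores, omega=0.0, alpha=0.05, beta=0.9, f0=0.0):
    """Generic GAS(1,1) recursion: f_{t+1} = omega + alpha * s_t + beta * f_t.
    `scores` are the (scaled) score innovations s_t; f_t is the time-varying
    parameter on an unconstrained scale."""
    f = [f0]
    for s in scores:
        f.append(omega + alpha * s + beta * f[-1])
    return f

# toy score sequence; in a real model s_t comes from the copula log-density
path = gas_path([1.0, -0.5, 0.2, 0.0])
# map the unconstrained parameter back to a valid copula correlation
rhos = [math.tanh(f) for f in path]
```

In a switching version, the triple (omega, alpha, beta) would itself depend on the state of a latent Markov chain, which is what the two-step EM estimation in the abstract handles.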
A large empirical literature focusing on the propagation mechanisms provides evidence that, during major crisis episodes, the failure of banks and financial institutions affects nonfinancial institutions through the balance-sheet and liquidity channels, threatening the stability of the real economy (see, e.g., Adrian & Shin, 2010; Brunnermeier, 2009; Brunnermeier & Pedersen, 2009). The ability of econometric models to account for the negative consequences of such extreme events for the overall financial system relies strongly on their flexibility in capturing the highly nonlinear and asymmetric dependence structures of financial returns. Over the years, the correlation coefficient has emerged as the most natural measure of dependence. However, despite its widespread use, the correlation fails to capture the important tail behavior of the joint probability distribution (see, e.g., Embrechts, McNeil, & Straumann, 1999, 2002). Hence, modeling the tail dependence and the asymmetric dependence between pairs of assets has become increasingly important in today's financial markets. Furthermore, the linear correlation coefficient as a measure of dependence is usually associated with the assumption of ... (J Appl Econ. 2019;34:43-65)
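The tail-dependence point can be made concrete with the standard closed-form coefficients of two Archimedean families. A Gaussian copula with |rho| < 1 has both tail-dependence coefficients equal to zero regardless of how high the correlation is, which is exactly the limitation noted above; the function names below are illustrative:

```python
def clayton_lower_tail(theta):
    """Lower tail dependence of the Clayton copula: lambda_L = 2**(-1/theta)."""
    return 2.0 ** (-1.0 / theta)

def gumbel_upper_tail(theta):
    """Upper tail dependence of the Gumbel copula: lambda_U = 2 - 2**(1/theta)."""
    return 2.0 - 2.0 ** (1.0 / theta)

# Both families put positive probability mass in a joint tail even though
# a Gaussian copula with the same rank correlation would put none there.
lam_L = clayton_lower_tail(2.0)  # ~0.707
lam_U = gumbel_upper_tail(2.0)   # ~0.586
```

At `theta = 1` the Gumbel coefficient is exactly zero (independence), so the formulas behave sensibly at the boundary of the parameter range.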
The derivation of the loss distribution from insurance data is a very interesting research topic but, at the same time, not an easy task. Seeking a single analytic form for the loss distribution may be misleading, although this approach is frequently adopted in the actuarial literature. Moreover, it is well recognized that the loss distribution is strongly skewed with heavy tails and presents small, medium and large claims which can hardly be fitted by a single analytic parametric distribution. Here we propose a finite mixture of Skew Normal distributions that provides a better characterization of insurance data. We adopt a Bayesian approach to estimate the model, providing the likelihood and the priors for all the unknown parameters, and we implement an adaptive Markov chain Monte Carlo algorithm to approximate the posterior distribution. We apply our approach to the well-known Danish fire loss data and evaluate relevant risk measures, such as Value-at-Risk and Expected Shortfall, as well.
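Once a loss model has been fitted (or losses are simulated from it), the two risk measures mentioned above can be estimated empirically. The sketch below uses a crude two-component lognormal sample as a stand-in for draws from a fitted mixture; the function names and the component parameters are illustrative assumptions, not the paper's model:

```python
import random

def var_es(losses, level=0.99):
    """Empirical Value-at-Risk and Expected Shortfall at the given level.
    VaR is the level-quantile of the loss distribution; ES is the mean
    loss beyond the VaR."""
    xs = sorted(losses)
    idx = int(level * len(xs))
    var = xs[idx]
    tail = xs[idx:]
    es = sum(tail) / len(tail)
    return var, es

random.seed(0)
# two heavy-tailed components mimic the "small/medium vs large claims" shape
losses = [random.lognormvariate(0.0, 1.0) for _ in range(9000)] + \
         [random.lognormvariate(2.0, 1.0) for _ in range(1000)]
var99, es99 = var_es(losses, 0.99)
```

By construction ES is at least as large as VaR at the same level, which is a useful sanity check on any implementation.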
This paper presents the R package MCS which implements the Model Confidence Set (MCS) procedure recently developed by Hansen, Lunde, and Nason (2011). The procedure consists of a sequence of tests that permits constructing a set of "superior" models, for which the null hypothesis of Equal Predictive Ability (EPA) is not rejected at a given confidence level. The EPA test statistic can be calculated for an arbitrary loss function, meaning that models can be compared on various aspects, for example point forecasts. The relevance of the package is shown using an example which illustrates in detail the use of the functions provided by the package. The example compares the ability of different models belonging to the ARCH family to predict large financial losses. We also discuss the implementation of the ARCH-type models and their maximum likelihood estimation using the popular R package rugarch developed by Ghalanos (2014).
This paper presents the R package MCS which implements the Model Confidence Set (MCS) procedure for model comparison. The MCS procedure consists of a sequence of tests that permits building a set of "superior" models, for which the null hypothesis of Equal Predictive Ability (EPA) is not rejected at a given confidence level. The EPA test statistic can be calculated for an arbitrary loss function, meaning that models can be compared on various aspects, such as, for example, point forecasts and density evaluation. The relevance of the package is shown using an example which illustrates in detail the use of the provided functions. The example compares the ability of different models belonging to the GARCH family to predict large financial losses. Code for reproducibility purposes is also reported.
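The elimination logic behind the MCS procedure can be sketched as follows. This toy version uses a fixed t-statistic cutoff instead of the bootstrap critical values of Hansen, Lunde, and Nason (2011), and all names and data are illustrative; the real R package should be used for actual analyses:

```python
import statistics

def simple_mcs(losses, t_crit=2.0):
    """Toy sketch of the MCS elimination idea: repeatedly drop the model
    with the largest t-statistic of its loss differential against the
    cross-model average, until the maximum t-statistic falls below t_crit.
    `losses` maps each model name to its per-period loss series."""
    surviving = set(losses)
    while len(surviving) > 1:
        T = len(next(iter(losses.values())))
        avg = [sum(losses[m][t] for m in surviving) / len(surviving)
               for t in range(T)]
        tstats = {}
        for m in surviving:
            d = [losses[m][t] - avg[t] for t in range(T)]
            mu = statistics.mean(d)
            se = statistics.stdev(d) / T ** 0.5
            tstats[m] = mu / se if se > 0 else 0.0
        worst = max(tstats, key=tstats.get)
        if tstats[worst] < t_crit:
            break  # EPA not rejected: the surviving set is the MCS
        surviving.discard(worst)
    return surviving

# two comparable models and one clearly inferior one (higher loss)
losses = {"a":   [1.00, 1.10, 0.90, 1.00, 1.05, 0.95],
          "b":   [1.02, 0.98, 1.10, 0.90, 1.00, 1.00],
          "bad": [2.00, 2.10, 1.90, 2.05, 1.95, 2.00]}
best = simple_mcs(losses)
```

On this toy data the inferior model is eliminated and the two comparable models survive as the "superior" set.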
In this paper we investigate the impact of news on the prediction of extreme financial returns using high-frequency data. We consider several model specifications that differ in the dynamic properties of the underlying stochastic process as well as in the innovation process. Since news items are essentially qualitative measures, they are first transformed into quantitative measures which are then introduced as exogenous regressors into the conditional volatility dynamics. Three basic sentiment indexes are constructed starting from three lists of words defined by the historical market response to news and by a discriminant analysis. Models are evaluated in terms of their accuracy in forecasting the out-of-sample Value-at-Risk of the STOXX Europe 600 sectors at different confidence levels, using several statistical tests and the Model Confidence Set procedure of Hansen et al. (2011). Since this procedure usually delivers a set of models having the same VaR predictive ability, we propose a new forecast combination technique that dynamically weights the VaR predictions of the models belonging to the optimal final set. Our results confirm that the inclusion of exogenous information, as well as the right specification of the returns' conditional distribution, significantly drives the ratio of actual to expected VaR violations towards one, and this is especially true at higher confidence levels.
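The combination scheme is described only at a high level in the abstract; a minimal sketch of one plausible variant weights each surviving model's VaR forecast by the inverse of its accumulated quantile (pinball) loss, so that recently accurate models receive more weight. All function names and the specific weighting rule are assumptions for illustration, not the paper's method:

```python
def quantile_loss(ret, var, alpha=0.01):
    """Asymmetric quantile (pinball) loss for a VaR forecast at level alpha;
    `var` is reported as a (negative) return quantile, and a violation
    occurs when the realized return falls below it."""
    return (alpha - (ret < var)) * (ret - var)

def combine_var(var_forecasts, past_losses):
    """Combine the VaR forecasts of the models in the final set, weighting
    each model by the inverse of its accumulated quantile loss."""
    inv = {m: 1.0 / past_losses[m] for m in var_forecasts}
    total = sum(inv.values())
    return sum(var_forecasts[m] * inv[m] / total for m in var_forecasts)

# two models with equal track records contribute equally
combined = combine_var({"garch": -0.02, "egarch": -0.04},
                       {"garch": 1.0, "egarch": 1.0})
```

The pinball loss is nonnegative and penalizes violations much more heavily than near-misses at small `alpha`, which makes it a natural scoring rule for VaR forecasts.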