“…Two possible approaches have been proposed in the literature. The first one (see, among others, Adrian and Brunnermeier, 2011; Acharya et al., 2012; Banulescu and Dumitrescu, 2015; Corsi et al., 2015) is purely econometric and is typically based on publicly available data on asset prices and the market equity value of publicly quoted financial institutions. Generically, the method consists in estimating conditional risk measures, such as the conditional Value-at-Risk or the conditional Expected Shortfall.…”
Monitoring and assessing systemic risk in financial markets is of great importance, but it often requires data that are unavailable or available only at very low frequency. For this reason, systemic risk assessment with partial information is potentially very useful for regulators and other stakeholders. In this paper we consider systemic risk due to fire sales spillovers and portfolio rebalancing, using the risk metrics defined by Greenwood et al. (2015). Using a method based on the constrained minimization of the cross entropy, we show that it is possible to assess the aggregate and individual banks' systemicness and vulnerability using only information on the size of each bank and the capitalization of each investment asset. We also compare our approach with an alternative, widespread application of the Maximum Entropy principle, which allows one to derive graph probability distributions and generate scenarios, and we use it to propose a statistical test for a change in banks' vulnerability to systemic events. JEL codes: C45; C80; G01; G33.
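The cross-entropy reconstruction summarized in the abstract can be illustrated with a minimal sketch: given only row totals (bank sizes) and column totals (asset capitalizations), the bank-by-asset exposure matrix that minimizes cross entropy relative to a uniform prior can be obtained by iterative proportional fitting (the RAS algorithm). The `ras_fill` helper and the numbers below are purely illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ras_fill(row_totals, col_totals, prior=None, n_iter=500):
    """Fill a bank-by-asset exposure matrix whose row sums (bank sizes)
    and column sums (asset capitalizations) are known, via iterative
    proportional fitting -- equivalent to minimizing the cross entropy
    relative to the prior. Requires sum(row_totals) == sum(col_totals)."""
    r = np.asarray(row_totals, dtype=float)
    c = np.asarray(col_totals, dtype=float)
    X = np.ones((len(r), len(c))) if prior is None else np.array(prior, dtype=float)
    for _ in range(n_iter):
        X *= (r / X.sum(axis=1))[:, None]   # rescale rows to match bank sizes
        X *= (c / X.sum(axis=0))[None, :]   # rescale columns to match asset caps
    return X

banks = [100.0, 60.0, 40.0]    # hypothetical bank sizes
assets = [120.0, 50.0, 30.0]   # hypothetical asset capitalizations
X = ras_fill(banks, assets)    # X[i, j] = estimated holding of asset j by bank i
```

With a uniform prior the fixed point is simply proportional to the outer product of the two totals; an informative prior (e.g. known zero holdings) changes the solution while still matching the marginals.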
“…Although there have been many practical applications of testing causality (technically, Granger non-causality) of the conditional mean, especially in economics (see, for example, Geweke (1984) [1], Hoover (2001) [2], Granger et al. (1986) [3], Comte and Lieberman (2000) [4], Hafner and Herwartz (2008) [5], Lee and Yang (2014) [6], Candelon and Tokpavi (2016) [7], and Corsi et al. (2015) [8]), there have been fewer applications of testing for causality in conditional higher moments, especially the variance or volatility associated with financial returns.…”
An early development in testing for causality (technically, Granger non-causality) in the conditional variance (or volatility) associated with financial returns was the portmanteau statistic for non-causality in variance of Cheung and Ng (1996). A subsequent development was the Lagrange Multiplier (LM) test of non-causality in the conditional variance by Hafner and Herwartz (2006), who provided simulation results showing that their LM test was more powerful than the portmanteau statistic for sample sizes of 1000 and 4000 observations. While the LM test for causality proposed by Hafner and Herwartz (2006) is an interesting and useful development, it is nonetheless arbitrary. In particular, the specification on which the LM test is based does not rely on an underlying stochastic process, so the alternative hypothesis is also arbitrary, which can affect the power of the test. The purpose of the paper is to derive a simple test for causality in volatility that provides regularity conditions arising from the underlying stochastic process, namely a random coefficient autoregressive process, and a test for which the (quasi-) maximum likelihood estimates have valid asymptotic properties under the null hypothesis of non-causality. The simple test is intuitively appealing as it is based on an underlying stochastic process, is sympathetic to Granger's (1969, 1988) notion of time series predictability, is easy to implement, and has a regularity condition that is not available in the LM test.
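The general idea behind causality-in-volatility testing can be sketched with a simple regression-based diagnostic: check whether lagged squared returns of one series help explain the squared returns of another beyond the latter's own lags. This is only an illustration of the idea, not the test derived in the paper; the function name and the t-statistic summary are hypothetical choices.

```python
import numpy as np

def variance_causality_stat(x, y, lags=1):
    """Illustrative check of whether lagged squared returns of x help
    explain squared returns of y, via OLS on the squared series.
    Returns the t-statistics of the lagged x^2 coefficients."""
    x2, y2 = np.asarray(x) ** 2, np.asarray(y) ** 2
    T = len(y2) - lags
    cols = [np.ones(T)]                      # constant
    for k in range(1, lags + 1):
        cols.append(y2[lags - k:len(y2) - k])  # own lag k of y^2
        cols.append(x2[lags - k:len(x2) - k])  # lag k of x^2 (the causality terms)
    X = np.column_stack(cols)
    yv = y2[lags:]
    beta, *_ = np.linalg.lstsq(X, yv, rcond=None)
    resid = yv - X @ beta
    s2 = resid @ resid / (T - X.shape[1])    # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)        # OLS coefficient covariance
    idx = [2 + 2 * k for k in range(lags)]   # positions of the x^2 coefficients
    return beta[idx] / np.sqrt(np.diag(cov)[idx])
```

Large absolute t-statistics suggest that volatility in `x` helps predict volatility in `y`; the formal tests discussed above refine this logic with GARCH-type dynamics and valid asymptotics.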
“…There is a constantly growing number of works proposing competing or alternative approaches for estimating the networks existing between groups of financial institutions, markets, countries and (not necessarily financial) assets. Among the many contributions in this area, we refer to Billio et al. (2012), Diebold and Yilmaz (2014, 2015), Hautsch et al. (2012, 2013, 2014, 2015), Barigozzi and Brownlees (2014), Ozdagli and Weber (2015), and Corsi et al. (2015), which have in common that they all refer to a financial or economic playground.…”
“…Billio et al. (2016) used BIS cross-holdings, Diebold and Yilmaz (2014) developed an approach based on variance decompositions of target series, and Acemoglu et al. (2012) used the Input-Output matrix, as did Barigozzi and Brownlees (2014), Hautsch et al. (2014) and Hautsch et al. (2015). The network might also be directed and unweighted, like those obtained using Granger causality in Billio et al. (2012) or Granger causality in the tails (Corsi et al., 2015), or it might even be unweighted and undirected, as in the economic sector case of Caporin and Paruolo (2015).…”
Statistical tests based on Granger causality are used in finance to detect the links among financial institutions that might be interpreted as channels along which shocks spread through the financial system. The links' structure can be formally defined as a network. Despite the growing interest in financial networks and the increasing understanding that there may be different channels over which financial contagion spreads, the literature on combinations of financial or economic networks is still very limited. In fact, the available competing approaches to estimating networks among financial institutions suggest the coexistence of different channels for the spread of risk. It is therefore of fundamental importance to allow for the possibility of combining those alternative risk-spreading channels to obtain a more complete picture of risk propagation within the financial system. Furthermore, when focusing on approaches for estimating the links between financial companies, we believe that Granger causality should be contrasted with methods pointing more clearly to the risk dimension, for instance by detecting causality among quantiles. We therefore use parametric and non-parametric approaches for the estimation of networks based on quantile causality tests. We show how to use a linear factor model as a device for estimating a combination of several networks that monitor the links across variables from different viewpoints, and we demonstrate that the Granger causality test should be combined with quantile-based causality when the focus is on risk propagation. We empirically validate our two main proposals: the use of quantile causality to infer the network structure across a set of (financial) variables, and the model-based combination of causality networks.
By using different datasets (US industrial portfolio returns, and a set of large banks and insurance companies), we first provide evidence of the different network structures that we can estimate from Granger causality and quantile causality. We then show how the networks differ across methods and over two different samples, one relating to the global financial crisis (2006-2008) and one to the years 2011-2015. Our results suggest that quantile causality networks are denser than Granger causality networks, a finding of relevance to systemic risk interpretation, because a denser network is indicative of a much larger set of links, and thus explains a possibly greater systemic effect of shocks.
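The Granger causality networks discussed above can be sketched as a matrix of pairwise tests: a directed edge i→j is drawn when lags of series i significantly improve the prediction of series j beyond j's own lags. The implementation below is a minimal illustration under assumed choices (the F-statistic cutoff `crit` is an arbitrary placeholder, not a calibrated critical value, and the function names are hypothetical).

```python
import numpy as np
from itertools import permutations

def granger_f_stat(x, y, p=1):
    """F-statistic for 'x Granger-causes y' with p lags, via two OLS fits."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    T = len(y) - p
    Yl = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    Xl = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    const = np.ones((T, 1))
    yv = y[p:]
    def ssr(D):
        b, *_ = np.linalg.lstsq(D, yv, rcond=None)
        r = yv - D @ b
        return r @ r
    ssr_r = ssr(np.hstack([const, Yl]))       # restricted: own lags only
    ssr_u = ssr(np.hstack([const, Yl, Xl]))   # unrestricted: add lags of x
    return ((ssr_r - ssr_u) / p) / (ssr_u / (T - 2 * p - 1))

def causality_network(returns, p=1, crit=4.0):
    """Directed adjacency matrix over the columns of `returns`:
    A[i, j] = 1 if the F-statistic for i -> j exceeds the cutoff."""
    n = returns.shape[1]
    A = np.zeros((n, n), dtype=int)
    for i, j in permutations(range(n), 2):
        A[i, j] = int(granger_f_stat(returns[:, i], returns[:, j], p) > crit)
    return A
```

A quantile causality network would replace the mean-regression F-test in the inner loop with a test of predictability at a chosen quantile of the conditional distribution, which is why the two networks can differ in density.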