This article develops a unifying framework for allocating the aggregate capital of a financial firm to its business units. The approach relies on an optimization argument, requiring that the weighted sum of measures for the deviations of the business units' losses from their respective allocated capitals be minimized. The approach is fair insofar as it requires capital to be close to the risk that necessitates holding it. It is also very flexible, in the sense that different forms of the objective function can reflect alternative definitions of corporate risk tolerance. Owing to this flexibility, the general framework reproduces several capital allocation methods that appear in the literature and allows for alternative interpretations and possible extensions.
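As a concrete illustration of the optimization argument (the notation below is illustrative, not quoted from the article), the allocation problem can be written as

\[
  \min_{K_1,\dots,K_n} \ \sum_{j=1}^{n} v_j \, \mathbb{E}\!\left[\zeta_j \, D\!\left(\frac{X_j - K_j}{v_j}\right)\right]
  \quad \text{subject to} \quad \sum_{j=1}^{n} K_j = K,
\]

where \(X_j\) are the business units' losses, \(K_j\) the capitals allocated to them, \(K\) the aggregate capital, \(v_j\) non-negative exposure weights, \(\zeta_j\) weighting random variables, and \(D\) a measure of deviation. Different choices of \(D\) and \(\zeta_j\) then reproduce different allocation rules.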
Convex risk measures were introduced by Deprez and Gerber (1985). Here, the problem of allocating risk capital to subportfolios is addressed when aggregate capital is calculated by a convex risk measure. The Aumann-Shapley value is proposed as an appropriate allocation mechanism. Distortion-exponential measures are discussed extensively, and explicit capital allocation formulas are obtained for the case where the risk measure belongs to this family. Finally, the implications of capital allocation with a convex risk measure for the stability of portfolios are discussed.
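For a Gâteaux-differentiable convex risk measure \(\rho\) and aggregate portfolio \(Z = X_1 + \dots + X_n\), the Aumann-Shapley allocation to subportfolio \(i\) is commonly written as (again, notation illustrative rather than quoted from the paper)

\[
  K_i \;=\; \int_0^1 \rho'(\gamma Z;\, X_i)\, d\gamma,
  \qquad
  \rho'(Z;\, X) \;=\; \lim_{\varepsilon \downarrow 0} \frac{\rho(Z + \varepsilon X) - \rho(Z)}{\varepsilon},
\]

i.e. the directional derivative of the risk measure is averaged along the diagonal from the zero portfolio to \(Z\). Summing over \(i\) gives \(\rho(Z) - \rho(0)\), so the allocations add up to the aggregate capital whenever \(\rho(0) = 0\) (the full-allocation property).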
In a quantitative model with uncertain inputs, the uncertainty of the output can be summarized by a risk measure. We propose a sensitivity analysis method based on derivatives of the output risk measure, in the direction of model inputs. This produces a global sensitivity measure, explicitly linking sensitivity and uncertainty analyses. We focus on the case of distortion risk measures, defined as weighted averages of output percentiles, and prove a representation of the sensitivity measure that can be evaluated on a Monte Carlo sample, as a weighted average of gradients over the input space. When the analytical model is unknown or hard to work with, nonparametric techniques are used for gradient estimation. This process is demonstrated through the example of a nonlinear insurance loss model. Furthermore, the proposed framework is extended in order to measure sensitivity to constant model parameters, uncertain statistical parameters, and random factors driving dependence between model inputs.
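The representation as a weighted average of gradients lends itself to a simple plug-in estimator. The sketch below is a minimal illustration for Expected Shortfall, viewed as a distortion risk measure with a flat percentile weight above the confidence level; the loss model g, its gradient, the input distributions and all names are assumptions made for this example, and the exact weighting and perturbation conventions of the paper may differ.

```python
import numpy as np

def es_weight(u, alpha=0.90):
    """Percentile weight of Expected Shortfall at level alpha: a distortion risk
    measure putting uniform weight 1/(1 - alpha) on output percentiles above alpha."""
    return np.where(u > alpha, 1.0 / (1.0 - alpha), 0.0)

def g(x):
    """Illustrative nonlinear loss model with two input factors (not from the paper)."""
    return x[:, 0] ** 2 + x[:, 0] * x[:, 1]

def grad_g(x):
    """Analytical gradient of g with respect to the two inputs."""
    return np.column_stack([2.0 * x[:, 0] + x[:, 1], x[:, 0]])

rng = np.random.default_rng(0)
n = 100_000
x = rng.lognormal(mean=0.0, sigma=0.3, size=(n, 2))    # Monte Carlo input sample
y = g(x)                                               # output sample

u = (np.argsort(np.argsort(y)) + 0.5) / n              # empirical output percentiles
w = es_weight(u)                                       # distortion weights at those percentiles

# Sensitivity to each input: a weighted average of gradients over the input space.
sensitivity = np.mean(w[:, None] * grad_g(x), axis=0)
print(sensitivity)                                     # one number per input factor
```

When an analytical gradient is unavailable, grad_g would be replaced by a nonparametric estimate fitted to the same input/output sample, in line with the abstract's remark on hard-to-work-with models.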
Sensitivity analysis is an important component of model building, interpretation and validation. A model comprises a vector of random input factors, an aggregation function mapping input factors to a random output, and a (baseline) probability measure. A risk measure, such as Value-at-Risk and Expected Shortfall, maps the distribution of the output to the real line. As is common in risk management, the value of the risk measure applied to the output is a decision variable. Therefore, it is of interest to associate a critical increase in the risk measure to specific input factors. We propose a global and model-independent framework, termed 'reverse sensitivity testing', comprising three steps: (a) an output stress is specified, corresponding to an increase in the risk measure(s); (b) a (stressed) probability measure is derived, minimising the Kullback-Leibler divergence with respect to the baseline probability, under constraints generated by the output stress; (c) changes in the distributions of input factors are evaluated. We argue that a substantial change in the distribution of an input factor corresponds to high sensitivity to that input and introduce a novel sensitivity measure to formalise this insight. Implementation of reverse sensitivity testing in a Monte-Carlo setting can be performed on a single set of input/output scenarios, simulated under the baseline model. Thus the approach circumvents the need for additional computationally expensive evaluations of the aggregation function. We illustrate the proposed approach through a numerical example of a simple insurance portfolio and a model of a London Insurance Market portfolio used in industry.
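Below is a minimal sketch of steps (b) and (c), under a deliberately simplified assumption: the output stress is taken to be an increase in the expected loss rather than in a risk measure such as Value-at-Risk. Under a mean constraint, the Kullback-Leibler-minimising stressed measure is an exponential tilt of the baseline, which can be represented as scenario weights on a single simulated sample; the model, input distributions and 20% stress level are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
n = 100_000

# Baseline scenarios for two input factors and a simple aggregation function.
x = rng.gamma(shape=2.0, scale=1.0, size=(n, 2))
y = x[:, 0] + 0.5 * x[:, 0] * x[:, 1]          # output loss under the baseline model

target = 1.2 * y.mean()                        # output stress: raise the expected loss by 20%

def tilted_mean(theta):
    """Expected loss under the exponentially tilted (KL-minimising) scenario weights."""
    w = np.exp(theta * (y - y.max()))          # subtract max for numerical stability
    w /= w.sum()
    return np.sum(w * y)

theta = brentq(lambda t: tilted_mean(t) - target, 0.0, 5.0)
w = np.exp(theta * (y - y.max()))
w /= w.sum()                                   # stressed scenario probabilities

# Step (c): compare input distributions under baseline vs stressed weights.
for i in range(x.shape[1]):
    print(f"input {i}: baseline mean {x[:, i].mean():.3f}, "
          f"stressed mean {np.sum(w * x[:, i]):.3f}")
```

Note that only the weights change; the input/output scenarios are simulated once under the baseline model, which is what makes the approach cheap relative to re-running the aggregation function.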
Failure probability under parameter uncertainty (R. Gerrard and A. Tsanakas, Cass Business School, City University London): In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the Log-Normal, Weibull and Pareto distributions), the paper shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (a) by reducing the nominal required failure probability, depending on the size of the available data set, and (b) by modifying the distribution itself that is used to calculate the risk control. Approach (a) corresponds to a frequentist/regulatory view of probability, while approach (b) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications.
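A quick simulation check of the two claims above, namely that estimation error inflates the failure probability and that the inflated probability does not depend on the true parameters, can be run in the Log-Normal case (equivalently, on the log scale, a Normal location-scale family). The sample size, nominal probability and plug-in estimators below are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def failure_frequency(mu, sigma, n=20, p=0.01, reps=100_000, seed=0):
    """Frequency with which a new Log-Normal loss exceeds a threshold set at the
    estimated (1 - p) quantile, with parameters estimated from n past observations.
    Working on the log scale is equivalent, since the comparison is monotone."""
    rng = np.random.default_rng(seed)
    z = norm.ppf(1 - p)
    log_data = rng.normal(mu, sigma, size=(reps, n))       # past observations (log scale)
    mu_hat = log_data.mean(axis=1)
    sigma_hat = log_data.std(axis=1, ddof=1)
    threshold = mu_hat + sigma_hat * z                     # plug-in risk control
    new_loss = rng.normal(mu, sigma, size=reps)            # next-period loss (log scale)
    return np.mean(new_loss > threshold)

# The realised failure frequency exceeds the nominal p = 1% and is, up to Monte
# Carlo error, the same regardless of the true parameters.
for mu, sigma in [(0.0, 1.0), (3.0, 0.5), (-2.0, 2.0)]:
    print(mu, sigma, failure_frequency(mu, sigma))
```

The invariance arises because the standardised quantity (new loss minus estimated location, divided by estimated scale) has a distribution that depends only on the sample size, which is what allows the exact calculations described in the abstract.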