Catastrophe loss modeling has enormous relevance for insurance companies due to the huge loss potential. In practice, geophysical-meteorological models are widely used to model these risks. These models are based on the simulation of the meteorological and physical parameters that cause natural events, and they evaluate the corresponding effects on the insured exposure of a given company. Due to their complexity, these models are often operated by external providers—at least from the perspective of many insurance companies. The outputs of these models can be made available, for example, in the form of event loss tables, which contain different statistical characteristics of the simulated events and the losses they cause relative to the exposure. The integration of these outputs into the internal risk model framework is fundamental for a consistent treatment of risks within the companies. The main subject of this work is the formulation of a performant resimulation algorithm for given event loss tables, which can be used for this integration task. The newly stated algorithm is based on cluster analysis techniques and represents a time-efficient way to perform sensitivity and scenario analyses.
A comprehensive model for cyber risk based on marked point processes and its applications to insurance

After scrutinizing technical, legal, financial, and actuarial aspects of cyber risk, a new approach for modelling cyber risk using marked point processes is proposed. Key covariates, required to model the frequency and severity of cyber claims, are identified. The presented framework explicitly takes into account incidents from malicious untargeted and targeted attacks as well as accidents and failures. The resulting model is able to include the dynamic nature of cyber risk, while capturing accumulation risk in a realistic way. The model is studied with respect to its statistical properties and applied to the pricing of cyber insurance and to risk measurement. The results are illustrated in a simulation study.

An optimal reinsurance simulation model for non-life insurance in the Solvency II framework

In this paper, we propose an approach to explore reinsurance optimization for a non-life multi-line insurer through a simulation model that combines alternative reinsurance treaties. Based on the Solvency II framework, the model maximises both the solvency ratio and portfolio performance under user-defined constraints. Data visualisation helps in understanding the numerical results and, together with the concept of the Pareto frontier, supports the selection of the optimal reinsurance program. We show in the case study that the methodology can easily be restructured to deal with multi-objective optimization, and, finally, the selected programs from each proposed problem are compared.

Premium rating without losses

In insurance, and even more so in reinsurance, it can happen that all one knows about a risk is that it has suffered no losses in the past, e.g. in seven years. Some of these risks are, moreover, so particular or novel that there are no similar risks from which to infer the loss frequency. In this paper we propose a loss frequency estimator that copes with such situations by relying only on the information coming from the risk itself: the “amended sample mean”. It is derived from a number of practice-oriented first principles and turns out to have desirable statistical properties. Some variants are possible, enabling insurers to align the method with their preferred business strategy by trading off between low initial premiums for new business and moderate premium increases after a loss for renewal business. We further give examples where it is possible to assess the average loss from some market or portfolio information, so that overall one obtains an estimator of the risk premium.

Socio-economic differentiation in experienced mortality modelling and its pricing implications

In recent years, the increasing availability and quality of portfolio data have enabled life insurers to offer fair and flexible pricing based on individual-level socio-economic attributes. Yet many insurers price based on the “experience factor”: portfolio-specific mortality divided by population mortality. We incorporate logistic regression into the experience mortality model of Plat (Insurance 45:123–132, 2009) and examine the effect of the differentiating factor(s) on the level and trend of mortality. The regression model accounts for socio-economic factors, such as salary, in the portfolio and constructs the corresponding differentiated experience factors. To address the varying uncertainty in each class of the differentiated experienced mortality, we provide the price of a simple survival benefit for a cohort with and without differentiation. We employ the EIOPA risk-margin price to examine how the differentiated mortality can be reflected in the required risk loading, using salary as an example differentiator. Further, we extend the risk-margin price to a “time-consistent” price to address the considerable likelihood of middle-time dynamics of the experience mortality in long-dated contracts. The Least Squares Monte Carlo (LSMC) method serves as the numerical method to calculate the conditional operators in the time-consistent price. We find that differentiation is significant across salary classes: for example, for the 40-year-old male cohort, salary differentiation can result in around a 7% discount for the low salary class and a 7.9% surcharge for the high salary class.

Bounds on Spearman’s rho when at least one random variable is discrete

Spearman’s rho is one of the most popular dependence measures used in practice to describe the association between two random variables. However, when at least one random variable is discrete, Spearman’s correlations are often bounded and restricted to a sub-interval of [−1,1]. Hence, small positive values of Spearman’s rho may actually indicate a strong positive dependence when close to the highest attainable value; similarly, slightly negative values of Spearman’s rho can actually mean a strong negative dependence. In this paper, we derive the best-possible upper and lower bounds for Spearman’s rho when at least one random variable is discrete. We illustrate the obtained lower and upper bounds in some situations of practical relevance.

A general framework for analysing the mortality experience of a large portfolio of lives: with an application to the UK universities superannuation scheme

We propose a general framework that can be used to analyse the mortality experience of a large portfolio of lives. The objective of the framework is to provide a firm evidence base to support the setting of future mortality assumptions for the portfolio as a whole or subgroup by subgroup. The framework is developed in tandem with an analysis of the mortality of pensioners in the Universities Superannuation Scheme (USS), the largest funded pension scheme in the UK and one with a highly educated and very homogeneous membership. The USS experience was compared with English mortality subdivided into deprivation deciles using the Index of Multiple Deprivation (IMD). USS was found to have significantly lower mortality rates than even IMD-10 (the least deprived of the English deciles), but with mortality improvement rates similar to that decile over the period 2005–2016. Higher pensions were found to predict lower mortality, but only weakly so, and only for persons who retired on the first day of a month (mostly from active service). We found that other potential covariates derived from an individual’s post/zip code (geographical region and the IMD associated with their local area) typically had no explanatory power. This lack of dependence is an important conclusion of the USS-specific analysis and contrasts with studies of more heterogeneous scheme memberships. Although the key findings are likely to be particular to USS, we argue that our analytical framework will be useful for other large pension schemes and life annuity providers.

Efficient evaluation of alternative reinsurance strategies using control variates

In this short communication, we present a new, simple control-variate Monte Carlo procedure for enhancing the evaluation accuracy of alternative reinsurance strategies that an insurance company might adopt.

Best upper and lower bounds on Spearman’s rho for zero-inflated continuous variables and their application to insurance

In this note, we establish the best lower and upper bounds on Spearman’s rho for zero-inflated continuous random variables studied by Pimentel (Kendall’s Tau and Spearman’s Rho for Zero Inflated Data (Ph.D. dissertation), Western Michigan University, Kalamazoo, 2009). The proposed bounds are explicitly expressed in terms of the respective probability masses at the origin. As illustrated in an example based on insurance data, these bounds are useful in practice when interpreting the values of Spearman’s rho.
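The control-variate device mentioned in the short communication above can be sketched in a few lines: when estimating an expected reinsurance recovery by Monte Carlo, the gross aggregate loss S, whose mean is known in closed form, serves as the control. The compound Poisson-exponential model and the layer terms below are illustrative assumptions, not the authors' actual setup.

```python
import random
from statistics import mean, variance

LAM, SEV_MEAN = 5.0, 100.0   # Poisson frequency, exponential severity mean
DED, LIMIT = 400.0, 600.0    # hypothetical excess-of-loss layer

def simulate_year(rng):
    """One year's gross aggregate loss S (compound Poisson-exponential)."""
    n, t = 0, rng.expovariate(LAM)
    while t < 1.0:               # Poisson count via waiting times
        n += 1
        t += rng.expovariate(LAM)
    return sum(rng.expovariate(1.0 / SEV_MEAN) for _ in range(n))

rng = random.Random(7)
S = [simulate_year(rng) for _ in range(20_000)]

# Quantity of interest: reinsurance recovery under the layer.
Y = [min(max(s - DED, 0.0), LIMIT) for s in S]

# Control variate: S itself, with known mean E[S] = LAM * SEV_MEAN.
es = LAM * SEV_MEAN
ms, my = mean(S), mean(Y)
cov_ys = sum((y - my) * (s - ms) for y, s in zip(Y, S)) / (len(S) - 1)
b = cov_ys / variance(S)                     # estimated optimal coefficient
Y_cv = [y - b * (s - es) for y, s in zip(Y, S)]

print(mean(Y_cv))   # estimates E[Y] with reduced sampling variance
```

Because the layer recovery is a monotone function of S, the two are strongly correlated and the variance reduction is substantial; the same simulated sample and control can then be reused to compare several candidate reinsurance structures.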