The seminar is chaired by Torsten Kleinow and Griselda Deelstra.

Identifying the determinants of lapse rates in life insurance: an automated Lasso approach
Lapse risk is a key risk driver for life and pensions business, with a material impact on the cash flow profile and profitability. The application of data science methods can replace the largely manual and time-consuming process of estimating a lapse model that reflects various contract characteristics and provides best estimate lapse rates, as needed for Solvency II valuations. In this paper, we use the Lasso method, which is based on a multivariate model and can identify patterns in the data set automatically. To identify hidden structures within covariates, we adapt and combine recently developed extended versions of the Lasso that apply different sub-penalties to individual covariates. In contrast to random forests or neural networks, the predictions of our lapse model remain fully explainable, and the coefficients can be used to interpret the lapse rate at the individual contract level. The advantages of the method are illustrated with data from a European life insurer operating in four countries. We show how structures can be identified efficiently and fed into a highly competitive, automatically calibrated lapse model.

Phase-type representations of stochastic interest rates with applications to life insurance
The purpose of the present paper is to incorporate stochastic interest rates into a matrix approach to multi-state life insurance, where reserves, moments of future payments and equivalence premiums can be obtained as explicit formulas in terms of product integrals or matrix exponentials. To this end, we consider the Markovian interest model, in which the rates are piecewise deterministic (or even constant) in the different states of a Markov jump process, and which is shown to integrate naturally into the matrix framework. The discounting factor then becomes the price of a zero-coupon bond, which may or may not be correlated with the biometric insurance process. Another attractive feature of the Markovian interest model is that the price of the bond coincides with the survival function of a phase-type distributed random variable. This, in particular, allows the Markovian interest rate models to be calibrated by maximum likelihood to observed data (prices) or to theoretical models such as a Vasiček model. Due to the denseness of phase-type distributions, we can approximate the price behaviour of any zero-coupon bond with interest rates bounded from below by choosing the number of possible interest rate values sufficiently large. For observed-data models with few data points, lower dimensions will usually suffice, while for theoretical models the dimensionality is only a computational issue.
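The phase-type representation mentioned in this abstract has a simple closed form: the zero-coupon bond price equals the survival function of a phase-type distribution, P(0, t) = π exp(Tt) 1. The sketch below is purely illustrative and not taken from the paper; the initial distribution pi and sub-intensity matrix T are assumed placeholder values.

```python
# Minimal sketch (illustrative values, not from the paper): a phase-type survival
# function read as a zero-coupon bond price, P(0, t) = pi * expm(T * t) * 1.
import numpy as np
from scipy.linalg import expm

pi = np.array([0.6, 0.4])        # assumed initial distribution over transient states
T = np.array([[-0.05, 0.03],     # assumed sub-intensity matrix: non-negative
              [0.02, -0.04]])    # off-diagonals, non-positive row sums

def zcb_price(t: float) -> float:
    """Survival function of the phase-type distribution, interpreted as a bond price."""
    return float(pi @ expm(T * t) @ np.ones(len(pi)))

for t in (1, 5, 10, 30):
    print(t, round(zcb_price(t), 4))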
What to offer if consumers do not want what they need? A simultaneous evaluation approach with an application to retirement savings products
Standard economic models of rational decision making provide information on how people should decide. In practice, human decisions are influenced by numerous behavioral patterns that lead to systematic deviations from rationally optimal behavior. In the context of retirement savings, this can result in substantial pension gaps, and hence in a reduction of the standard of living in the retirement phase. The aim of this work is to introduce a general framework to (simultaneously) assess and evaluate the objectively rational utility and the subjectively perceived attractiveness. We illustrate the approach by means of an application to retirement savings products. Such a combined approach can help to identify or design retirement savings products that create a high (albeit not the maximum possible) objective utility while at the same time being subjectively of high (albeit not maximum possible) attractiveness. We argue that a focus on such products might lead to improved consumer decisions compared to observed decisions, which are often driven by subjective attractiveness (resulting in rather low objective utility).

Optimal insurance for a prudent decision maker under heterogeneous beliefs
In this paper, we extend some of the results in the literature on optimal insurance under heterogeneous beliefs in the presence of the no-sabotage condition by allowing the likelihood ratio function to be non-monotone. Under the assumption of prudence and a mild smoothness condition on the likelihood ratio function, we first partition the whole domain of loss into disjoint regions and then obtain an explicit parametric form for the optimal indemnity function over each piece by resorting to the marginal indemnity function formulation. The case where there is belief singularity between the decision maker and the insurer is also studied. As an illustration, we consider a special case of our setting in which the premium principle is a distortion premium principle. We then obtain a closed-form characterization of the optimal indemnity for the cases where premia are determined by Value-at-Risk and Tail Value-at-Risk. Our study complements the literature and provides new insights into several similar problems.

Application of machine learning methods to predict drought cost in France
This paper addresses the prediction of the total damage costs brought on by a drought episode under the French "Régime de Catastrophes Naturelles". Due to the specificity of this natural disaster compensation scheme, an early prediction of the cost of a disaster is needed to improve strategic decisions. Taking advantage of access, through a partnership with the Mission Risques Naturels, to a database of natural disaster claims fed by the major French insurance companies, we combine the information on drought event claims contained in this database with meteorological and socioeconomic data to achieve a more comprehensive knowledge of the exposure. Our prediction approach relies on the comparison of different statistical models and machine learning algorithms. To improve the prediction performance, we propose an aggregation of the different models. Since the main difficulty encountered is imbalanced data, as the large majority of cities are not affected by a drought event, the predictions are assessed with F1-scores and precision-recall curves.
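As a rough illustration of the evaluation metrics named in the drought-cost abstract (a generic sketch, not the authors' code or data), the following snippet fits a classifier on simulated, imbalanced labels and reports the F1-score and the points of a precision-recall curve with scikit-learn.

```python
# Minimal sketch (simulated stand-in data): F1-score and precision-recall curve
# for a rare positive class, the setting described in the drought-cost abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, precision_recall_curve

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))                                   # assumed covariates
y = (X[:, 0] + 0.5 * rng.normal(size=5000) > 1.8).astype(int)    # ~5% positives

clf = LogisticRegression().fit(X, y)
scores = clf.predict_proba(X)[:, 1]

print("F1 at a 0.5 threshold:", round(f1_score(y, (scores > 0.5).astype(int)), 3))
precision, recall, _ = precision_recall_curve(y, scores)          # PR-curve points
```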
A resimulation framework for event loss tables based on clustering
Catastrophe loss modeling has enormous relevance for insurance companies due to the huge loss potential. In practice, geophysical-meteorological models are widely used to model these risks. These models simulate the meteorological and physical parameters that cause natural events and evaluate the corresponding effects on the insured exposure of a given company. Due to their complexity, these models are often operated by external providers, at least from the perspective of many insurance companies. The outputs of these models can be made available, for example, in the form of event loss tables, which contain different statistical characteristics of the simulated events and the losses they cause relative to the exposure. The integration of these outputs into the internal risk model framework is fundamental for a consistent treatment of risks within the companies. The main subject of this work is the formulation of a performant resimulation algorithm for given event loss tables, which can be used for this integration task. The proposed algorithm is based on cluster analysis techniques and represents a time-efficient way to perform sensitivity and scenario analyses.

A simulation study for multifactorial genetic disorders to quantify the impact of polygenic risk scores on critical illness insurance
With advances in genetic research, the understanding of the genetic structure of disease and the ability to predict disease risk have been enhanced. Polygenic risk scores (PRS) have been developed to assess a person's risk of developing any heritable disease. PRS have two primary utilities that make them particularly relevant for insurers: the ability to identify high-risk groups when used independently or in combination with standard risk factors, and the ability to inform early interventions that may alter future morbidity and mortality. Using heart disease as a case study, a simulation-based model is designed that introduces polygenic risk scoring into the actuarial analysis framework and then quantifies the adverse selection due to the information asymmetry introduced by PRS. Individual and parental disease liability as well as PRS were simulated under a liability threshold model. A series of validations were conducted to confirm the utility of our simulated data sets. We explored three scenarios describing how insurance applicants use their PRS results to guide their insurance purchasing decisions and calculated the increased premiums that insurers would need to charge to counteract this. The accuracy of the PRS has the most significant impact on premiums, and the proportion of individuals who know their PRS also has a substantial impact.

A market- and time-consistent extension for the EIOPA risk-margin
In this paper, we investigate market- and time-consistent valuation of life-insurance liabilities, which are long-dated by nature. To obtain a market- and time-consistent value, the "two-step market evaluation" introduced by Pelsser and Stadje (Math Finance 24:25–65, 2014) is used to evaluate a hybrid payoff with underlying hedgeable financial and (partially) unhedgeable actuarial risks. The resulting time-consistent and market-consistent (TCMC) price captures the dynamics of the risk drivers over the lifetime of the contract. We show that the EIOPA standard formula for the risk margin is not time-consistent, and we construct a time-consistent version of the risk margin that captures the extra uncertainties from the process dynamics. EIOPA's standard formula for the risk margin is compared to the TCMC price for a simple unit-linked contract, and we show that the effects of time-inconsistency increase with maturity and are significant for long-dated contracts.
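For context on the static standard formula the paper critiques, the sketch below computes a cost-of-capital risk margin of the familiar form RM = CoC · Σ_t SCR(t)/(1 + r)^(t+1). The SCR path and the flat discount rate are illustrative assumptions, not figures from the paper; only the 6% cost-of-capital rate is the prescribed value.

```python
# Minimal sketch (illustrative numbers): cost-of-capital risk margin,
# RM = CoC * sum_t SCR(t) / (1 + r)^(t + 1), for an assumed projected SCR path.
COC = 0.06                                   # prescribed cost-of-capital rate
scr_path = [100.0, 80.0, 55.0, 30.0, 10.0]   # assumed projected SCR per future year
r = 0.02                                     # assumed flat risk-free rate

risk_margin = COC * sum(scr / (1 + r) ** (t + 1) for t, scr in enumerate(scr_path))
print(round(risk_margin, 2))
```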
A systematic literature review on sustainability issues along the value chain in insurance companies and pension funds
Sustainability is now a priority issue that governments, businesses and society in general must address in the short term. In their role as major global institutional investors and risk managers, insurance companies and pension funds are strategic players in building socio-economic and sustainable development. To gain a comprehensive understanding of the current state of action and research on environmental, social and governance (ESG) factors in the insurance and pension sectors, we conduct a systematic literature review. We rely on the PRISMA protocol, analyze 1731 academic publications available in the Web of Science database up to the year 2022, and refer to 23 studies outside of scientific research retrieved from the websites of key international and European organizations. To study the corpus of literature, we introduce a classification framework along the insurance value chain, including external stakeholders. The main findings reveal that risk, underwriting and investment management are the most researched areas among the nine categories considered in our framework, while claims management and sales tend to be neglected. Regarding ESG factors, climate change, as part of the environmental factor, has received the most attention in the literature. After reviewing the literature, we summarize the main sustainability issues and potential related actions. Given the current nature of the sustainability challenges for the insurance sector, this literature review is relevant to academics and practitioners alike.

Individual claims reserving using activation patterns
A claim often impacts not one but multiple insurance coverages provided in the contract. To account for this multivariate feature, we propose a new individual claim reserving model built around the activation of the different coverages to predict the reserve amounts. Using the framework of multinomial logistic regression, we model the activation of the different insurance coverages for each claim and their development in the following years, i.e., the activation of other coverages in later years and all the possible payments that might result from them. As such, the model allows us to complete the individual development of the open claims in the portfolio. Using a recent automobile dataset from a major Canadian insurance company, we demonstrate that this approach generates accurate predictions of the total reserves and the reserves per insurance coverage. This analysis allows the insurer to gain new insights into the dynamics of its claims reserves.

Model selection with Pearson's correlation, concentration and Lorenz curves under autocalibration
Wüthrich (1) established that the Gini index is a consistent scoring rule in the class of autocalibrated predictors. This note further explores performance criteria in this class. Elementary Pearson's correlation turns out to be consistent when restricted to autocalibrated predictors. Also, any performance measure that is minimized for predictors that are comonotonic with the true regression model is consistent under autocalibration. This provides a new proof of the consistency of the Gini index. In addition, it is established that the concentration curve of the true model is the lowest possible concentration curve under autocalibration, and that the same property holds for the Lorenz curve.
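As a small illustration of the autocalibration property E[Y | π(X)] = π(X) that underlies these results (a generic sketch, not code from the note), one can bin observations on the predicted value and compare the group means of the response with the group means of the predictor.

```python
# Minimal sketch (simulated example): an empirical autocalibration check,
# comparing bin means of the response Y with bin means of the predictor pi(X).
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(size=10_000)
y = rng.poisson(lam=2 * x)            # true regression function is 2x
pi = 2 * x                            # candidate predictor (here the true model)

edges = np.quantile(pi, np.linspace(0, 1, 11))
idx = np.digitize(pi, edges[1:-1])    # decile bin index 0..9 of the predictor
for b in range(10):
    m = idx == b
    print(b, round(pi[m].mean(), 3), round(y[m].mean(), 3))  # should roughly agree
```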