The cost of insurance is rising worldwide, driven in part by frequent policy changes and fraud. In recent years, AI techniques have been applied to a growing number of insurance tasks. However, AI researchers often struggle to absorb the vast and sophisticated domain knowledge as well as the rapidly changing literature. Although several studies have already examined artificial intelligence in the context of specific insurance tasks, our work provides a unified, systematic survey of the rapidly growing literature on AI in the insurance industry, relating the various AI techniques to all major insurance tasks. Focusing on work from 2017 to the present, we cover machine learning, big data, blockchain, data mining, and chatbot techniques and their application to fraud detection, insurance policy management, claim prediction, risk prediction, and other areas, to systematically introduce the existing work on AI techniques in the insurance sector.
Traditional machine learning metrics, such as precision, recall, accuracy, mean squared error (MSE), and root mean squared error (RMSE), do not on their own give practitioners sufficient confidence in the performance and dependability of their models. Model explanations are therefore needed both to establish machine-learning professionals' trust in a model's predictions and to give domain specialists a human-understandable account of them. This was achieved by developing a model-agnostic, locally accurate set of explanations that makes the conclusions of the underlying models understandable to anyone in the insurance industry, experts and non-experts alike. The interpretability of a model is vital for effective human interaction with machine learning systems, and individually explained predictions help gauge trust while complementing validation during model selection. This study therefore applies the LIME and SHAP approaches to understand and explain a random forest regression model developed to predict insurance premiums. A drawback of the SHAP algorithm, as these experiments indicate, is its lengthy computing time: it must evaluate every possible feature combination to produce its results. The experiments focused on the model's interpretability and explainability using LIME and SHAP, not on the accuracy of insurance premium prediction itself. Two experiments were conducted: the first interpreted the random forest regression model using LIME, while the second interpreted it using SHAP.
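The computational cost attributed to SHAP above comes from the definition of Shapley values, which averages a feature's marginal contribution over every possible coalition of the other features. A minimal sketch of that exact computation is below; the toy premium function, feature values, and baseline are illustrative assumptions, not the model or data from the study, and absent features are simply replaced by baseline values.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one instance x.

    Features outside a coalition are replaced by their baseline values.
    Enumerating all 2^(n-1) coalitions per feature is what makes exact
    SHAP computation slow as the number of features grows.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                # Shapley kernel weight for a coalition of this size
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

# Hypothetical additive premium model: base charge plus age and smoker terms.
predict = lambda v: 100 + 200 * v[0] + 500 * v[1]
x = [2, 1]          # instance to explain: age band 2, smoker
baseline = [0, 0]   # reference instance

phi = shapley_values(predict, x, baseline)
```

By the efficiency property, the attributions sum to predict(x) - predict(baseline); for an additive model like this toy one, each phi[i] is exactly that feature's own contribution. Libraries such as shap avoid the exponential enumeration with model-specific approximations (e.g., tree-based explainers), which is why they are preferred in practice.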