In the last few years, eXplainable Artificial Intelligence (XAI) has been attracting attention in data analytics, as it shows great potential for interpreting the results of complex machine learning models in medical applications. In a nutshell, the outcome of machine learning-based applications should be understandable by end users, especially in medical contexts where decisions must be made carefully. As such, many efforts have been made to explain the outcomes of complex deep learning models in image recognition and classification tasks, as in the case of melanoma cancer. This paper represents, to the best of our knowledge, a first attempt to experimentally and technically investigate two widely used XAI methods, Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), in terms of reproducibility of results and execution time on a melanoma image classification data set. The results show that XAI methods provide advantages for interpreting model outputs in melanoma image classification. Concretely, LIME outperforms the SHAP gradient explainer in terms of both reproducibility and execution time.
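To make the comparison concrete, the following is a minimal sketch of how LIME and the SHAP gradient explainer could be applied to an image classifier and timed. It assumes the standard lime and shap Python packages and a Keras model; the toy CNN, image size, and sample counts are illustrative placeholders, not the paper's actual experimental setup or data.

```python
# Illustrative sketch: explaining one prediction with LIME and SHAP GradientExplainer
# and measuring execution time. The model and data below are placeholders.
import time
import numpy as np
import tensorflow as tf
from lime import lime_image
import shap

# Placeholder binary classifier standing in for a melanoma/benign model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Dummy images standing in for dermoscopy images scaled to [0, 1].
images = np.random.rand(16, 64, 64, 3).astype(np.float32)
test_image = images[0]

# --- LIME: perturbation-based local surrogate over image regions ---
start = time.perf_counter()
lime_explainer = lime_image.LimeImageExplainer()
lime_explanation = lime_explainer.explain_instance(
    test_image.astype(np.double),
    classifier_fn=lambda x: model.predict(x, verbose=0),
    top_labels=1,
    num_samples=500,  # number of perturbed samples
)
lime_time = time.perf_counter() - start

# --- SHAP: gradient explainer against a background set ---
start = time.perf_counter()
shap_explainer = shap.GradientExplainer(model, images)
shap_values = shap_explainer.shap_values(test_image[np.newaxis, ...])
shap_time = time.perf_counter() - start

print(f"LIME: {lime_time:.2f}s, SHAP GradientExplainer: {shap_time:.2f}s")
```

Reproducibility could then be assessed, for example, by re-running each explainer several times on the same image and comparing the resulting attribution maps.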