Artificial intelligence is becoming increasingly prevalent across a wide range of industries and academic fields. As AI models grow in popularity and performance, they are taking on more critical decision-making tasks. Although AI models, and machine learning models in particular, are successful in research, they have numerous limitations and drawbacks in practice. Moreover, the lack of transparency behind their behavior leaves users with little understanding of how these models reach specific decisions, especially for complex state-of-the-art machine learning algorithms. Complex machine learning systems tend to rely on less transparent algorithms, exacerbating the problem. This survey analyzes the significance and evolution of explainable AI (XAI) research across various domains and applications. Throughout this study, we develop a rich repository of explainability classifications and summaries, along with their applications and practical use cases. We believe this study will make it easier for researchers to understand the full range of explainability methods and access their applications in one place.
In this paper, we propose an atlas-based method for segmenting the hippocampus-amygdala complex. An atlas is registered to every subject, and the corresponding transformation is computed for each one. Applying this transformation to the structural segmentation of the complex in the atlas yields an initial surface for the hippocampus-amygdala complex of each subject. A possibility-based approach is introduced for the segmentation process. Two kinds of deformation, one driven by edges and one by information obtained from tissue segmentation, are used to find the different parts of the complex. A new energy is defined to exploit the tissue information: it expands the model to include points where gray matter is dominant and withdraws it from points where white matter or CSF is dominant. The initial shape is divided into several parts, and along the normal direction at the center of each part we construct a profile that searches for the point maximizing this new energy. The algorithm reliably recovers the overall shape of the complex and copes with its poor image features, such as weak edges and noise. It is evaluated on five subjects and validated using two different validation methods.
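The profile search along the surface normal can be sketched as follows. This is a minimal illustration only: the tissue probability maps `p_gm`, `p_wm`, `p_csf`, the step sizes, and the simple difference used as the energy are all assumptions standing in for the paper's actual energy definition.

```python
import numpy as np

def profile_search(center, normal, energy, n_steps=10, step=0.5):
    """Sample points along the normal of a surface patch (inward and
    outward) and return the point that maximizes the given energy."""
    normal = normal / np.linalg.norm(normal)
    offsets = np.arange(-n_steps, n_steps + 1) * step
    candidates = center + offsets[:, None] * normal
    scores = np.array([energy(p) for p in candidates])
    return candidates[np.argmax(scores)]

def make_tissue_energy(p_gm, p_wm, p_csf):
    """Hypothetical tissue-based energy: reward gray-matter-dominant
    voxels, penalize white-matter- and CSF-dominant voxels."""
    def energy(point):
        i, j, k = np.round(point).astype(int)
        return p_gm[i, j, k] - p_wm[i, j, k] - p_csf[i, j, k]
    return energy
```

In a full implementation, the returned point for each patch would drive the deformation of that part of the initial surface toward the gray-matter boundary.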
The use of AI and machine learning models in industry is growing rapidly. Because of this growth and the strong performance of these models, ever more mission-critical decision-making intelligent systems have been developed. Despite their success, AI solutions used for decision-making have a significant drawback: a lack of transparency. The opacity of their behavior, particularly in complex state-of-the-art machine learning algorithms, leaves users with little understanding of how these models make specific decisions. To address this issue, algorithms such as LIME and SHAP (Kernel SHAP) have been introduced. These algorithms explain AI models by generating data samples around an intended test instance through perturbation of its features. This process has the drawback of potentially generating invalid data points outside the data domain. In this paper, we improve LIME and SHAP by using a Variational AutoEncoder (VAE), pre-trained on the training dataset, to generate realistic data around the test instance. We also employ a sensitivity-based feature importance with a Boltzmann distribution to help explain the behavior of the black-box model around the intended test instance.
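The VAE-based sampling step can be sketched as follows. This is a hedged illustration, not the paper's implementation: `encode` and `decode` are placeholders for the pre-trained VAE's encoder mean function and decoder, and the use of a Boltzmann distribution over sample-to-instance distances as weights is one plausible reading of the sensitivity weighting described in the abstract.

```python
import numpy as np

def vae_neighborhood(x, encode, decode, n_samples=100, sigma=0.5, seed=None):
    """Generate realistic perturbations of test instance x by adding
    Gaussian noise in the VAE latent space and decoding back to the
    data domain, instead of perturbing raw features directly."""
    rng = np.random.default_rng(seed)
    z = encode(x)  # latent code of the test instance (1-D array)
    noise = rng.normal(0.0, sigma, size=(n_samples, z.shape[0]))
    return np.array([decode(z + eps) for eps in noise])

def boltzmann_weights(distances, temperature=1.0):
    """Weight samples by a Boltzmann distribution over their distance
    to the test instance: closer samples get exponentially more weight."""
    w = np.exp(-np.asarray(distances) / temperature)
    return w / w.sum()
```

The generated neighborhood and weights would then feed a local surrogate model (as in LIME) or a Kernel SHAP estimate, keeping all perturbed points inside the data manifold learned by the VAE.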