Abstract: The increasing usage of complex Machine Learning models for decision-making has raised interest in explainable artificial intelligence (XAI). In this work, we focus on the effects of providing accessible and useful explanations to non-expert users. More specifically, we propose generic XAI design principles for contextualizing and allowing the exploration of explanations based on local feature importance. To evaluate the effectiveness of these principles for improving users' objective understanding and satisfaction…
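To make the notion of an "explanation based on local feature importance" concrete, below is a minimal Python sketch (not the authors' tooling) that scores each feature of a single prediction via a crude mean-ablation. The dataset, model, and scoring scheme are illustrative assumptions, not what the paper used.

```python
# Minimal sketch of a local feature-importance explanation for a tabular
# classifier. Illustration only: model, dataset, and perturbation scheme
# are assumptions, not the method from the paper.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y, names = data.data, data.target, data.feature_names
model = RandomForestClassifier(random_state=0).fit(X, y)

def local_importance(model, x, background):
    """Score each feature by how much the predicted probability changes
    when that feature is replaced by its dataset mean (a crude ablation)."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = []
    for j in range(len(x)):
        x_pert = x.copy()
        x_pert[j] = background[j]          # ablate feature j
        p = model.predict_proba(x_pert.reshape(1, -1))[0, 1]
        scores.append(base - p)            # positive = pushed prediction up
    return np.array(scores)

scores = local_importance(model, X[0], X.mean(axis=0))
for j in np.argsort(np.abs(scores))[::-1][:5]:
    print(f"{names[j]}: {scores[j]:+.3f}")
```

A real local-importance method (e.g., LIME or SHAP) replaces the single-feature ablation with a principled attribution scheme, but the output format is the same: a signed score per feature for one prediction.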
“…Similarly, current research in the field of HCI-based XAI investigates how users perceive user interfaces (UI) and, thereby, their expectations towards the use of intelligent systems (e.g., Mualla et al. 2022; Stumpf et al. 2019). This research aims to reveal the influence of HCI in the field of XAI research (e.g., Abdul et al. 2018; Bove et al. 2022). Lastly, research addresses the impact of interactive UI elements within intelligent systems (e.g., Evans et al. 2022; Khanna et al. 2022).…”
Due to computational advances in the past decades, so-called intelligent systems can learn from increasingly complex data, analyze situations, and support users in their decision-making. In practice, however, the complexity of these intelligent systems leaves users hardly able to comprehend the decision logic inherent in the underlying machine learning model. As a result, the adoption of this technology, especially in high-stakes scenarios, is hampered. In this context, explainable artificial intelligence offers numerous starting points for making the inherent logic explainable to people. While research demonstrates the necessity of incorporating explainable artificial intelligence into intelligent systems, there is still a lack of knowledge about how to socio-technically design these systems to address acceptance barriers among different user groups. In response, we have derived and evaluated a nascent design theory for explainable intelligent systems based on a structured literature review, two qualitative expert studies, a real-world use case application, and quantitative research. Our design theory comprises design requirements, design principles, and design features covering global explainability, local explainability, personalized interface design, and psychological/emotional factors.
“…XAI helps users understand the underlying structure of black-box machine learning models and how they produce their outputs, hence boosting users' confidence in these models and encouraging them to use them. Unfortunately, most XAI methods in use produce explanations in a technical format that is not easily understandable to a non-ML expert (Bove, Aigrain, Lesot, Tijus, & Detyniecki, 2022), which, in the case of power generation, most operational staff will be. Research shows that experts in the application domain tend to trust machine learning models when they are provided with human-friendly explanations that enable them to understand the rationale of ML models (Bove et al., 2022).…”
“…Also, there is a requirement for distinctly different explanations for stakeholders in different application domains (Mohseni, Zarei, & Ragan, 2018).…”
A civil nuclear generation plant must maximise its operational uptime in order to maintain its viability. With an aging plant and heavily regulated operating constraints, monitoring is commonplace, but identifying health indicators to pre-empt disruptive faults is challenging owing to the volumes of data involved. Machine learning (ML) models are increasingly deployed in prognostics and health management (PHM) systems in various industrial applications; however, many of these are black-box models that provide good performance but little or no insight into how predictions are reached. In nuclear generation there is significant regulatory oversight and therefore a necessity to explain decisions based on outputs from predictive models. These explanations can then enable stakeholders to trust these outputs, satisfy regulatory bodies, and subsequently make more effective operational decisions. How ML model outputs convey explanations to stakeholders matters: explanations must be expressed in terms that are both humanly understandable and relevant to the technical domain, so that stakeholders can rapidly interpret predictions, trust them, and act on them more effectively. The main contributions of this paper are: 1. introducing XAI into the PHM of industrial assets and providing a novel set of algorithms that translate the explanations produced by SHAP into text-based, human-interpretable explanations; and 2. considering the context of these explanations as intended for application to prognostics of critical assets in industrial applications. The use of XAI will not only help in understanding how these ML models work, but also describe the most important features contributing to predicted degradation of the nuclear generation asset.
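As a rough illustration of the idea of translating SHAP attributions into text, the sketch below fits a toy degradation regressor, computes SHAP values for one asset snapshot, and renders the top contributions as plain-language sentences. It is not the paper's algorithm; the model, sensor feature names, and wording template are assumptions made for the example.

```python
# Minimal sketch: turn SHAP attributions into a text explanation.
# NOT the paper's algorithm; model, feature names, and template are assumed.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Toy stand-in for a degradation model over sensor features.
rng = np.random.default_rng(0)
feature_names = ["bearing_temp", "vibration_rms", "coolant_flow", "load_factor"]
X = rng.normal(size=(500, len(feature_names)))
y = 0.8 * X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(scale=0.1, size=500)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
x = X[:1]                                   # one asset snapshot to explain
shap_values = explainer.shap_values(x)[0]   # per-feature contributions

def to_text(shap_values, feature_names, top_k=3):
    """Render the top-k SHAP contributions as plain-language sentences."""
    order = np.argsort(np.abs(shap_values))[::-1][:top_k]
    lines = []
    for j in order:
        direction = "increases" if shap_values[j] > 0 else "decreases"
        lines.append(f"{feature_names[j]} {direction} the predicted "
                     f"degradation by {abs(shap_values[j]):.2f}.")
    return " ".join(lines)

print(to_text(shap_values, feature_names))
```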
“…One strategy for improving user understanding of AI systems is explainable AI (XAI). Machine learning developers have created a large number of explanation techniques for various types of models [4,12,2,45], and the effects of XAI on user understanding have been the subject of several user studies in the AI literature [3,47,55,8,11]. However, despite efforts to create benchmarks for objectively evaluating XAI techniques [16,30,56,1], understanding how exactly XAI affects trust and behavior of lay users in human-AI interaction has remained a challenge [44,12,20,21].…”
Trust calibration is essential in AI-assisted decision-making tasks. If human users understand the reasons for a prediction of an AI model, they can assess whether or not the prediction is reasonable. Especially for high-risk tasks like mushroom hunting (where a wrong decision may be fatal), it is important that users trust or overrule the AI in the right situations. Various explainable AI methods are currently being discussed as potentially useful for facilitating understanding and for calibrating user trust. So far, however, it is unclear which approaches are most effective. Our work takes on this issue in a between-subjects experiment with N = 501 participants, who were tasked to classify the edibility of mushrooms depicted on images. We compare the effects of three XAI methods on human AI-assisted decision-making behavior: (i) Grad-CAM attributions; (ii) nearest neighbor examples; and (iii) an adoption of network dissection. For nearest neighbor examples, we found a statistically significant improvement in user performance compared to a condition without explanations. Effects did not reach statistical significance for Grad-CAM and network dissection; for the latter, however, the effect size estimators show a similar tendency as for nearest neighbor examples. We also found that the effects varied across task items (i.e., mushroom images). Explanations seem to be particularly effective if they reveal possible flaws in case of wrong AI classifications or reassure users in case of correct classifications. Our results suggest that well-established methods might not be as beneficial to end users as expected and that XAI techniques must be chosen carefully in real-world scenarios.
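For readers unfamiliar with example-based explanations, the following is a minimal sketch of the nearest neighbor idea: retrieve the training items whose feature embeddings are closest to the query and show them alongside the AI prediction. It is not the study's implementation; the `embed` function is a hypothetical stand-in for any feature extractor (e.g., the classifier's penultimate layer), and the data are synthetic.

```python
# Minimal sketch of an example-based (nearest neighbor) explanation.
# Illustration under assumptions, not the study's implementation.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def embed(images):
    """Hypothetical feature extractor; returns one vector per image."""
    return np.asarray([img.reshape(-1) for img in images], dtype=float)

# Toy "training set" of 8x8 images with labels (edible / inedible).
rng = np.random.default_rng(1)
train_images = rng.random((100, 8, 8))
train_labels = rng.choice(["edible", "inedible"], size=100)

index = NearestNeighbors(n_neighbors=3).fit(embed(train_images))

def explain_by_examples(query_image):
    """Return indices and labels of the closest training examples,
    which would be shown to the user as the explanation."""
    _, idx = index.kneighbors(embed([query_image]))
    return [(int(i), train_labels[i]) for i in idx[0]]

print(explain_by_examples(rng.random((8, 8))))
```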