Deep neural networks have shown excellent performance in many real-world applications. Unfortunately, they may exhibit "Clever Hans"-like behavior, making use of confounding factors within datasets to achieve high performance. In this work, we introduce the novel learning setting of "explanatory interactive learning" (XIL) and illustrate its benefits on a plant phenotyping research task. XIL adds the scientist into the training loop, such that she interactively revises the original model by providing feedback on its explanations. Our experimental results demonstrate that XIL can help to avoid Clever Hans moments in machine learning and encourages (or discourages, if appropriate) trust in the underlying model.

Imagine a plant phenotyping team attempting to characterize crop resistance to plant pathogens. The plant physiologist records a large amount of hyperspectral imaging data. Impressed by the results of deep learning in other scientific areas, she wants to establish similar results for phenotyping. Consequently, she asks a machine learning expert to apply deep learning to analyze the data. Luckily, the resulting predictive accuracy is very high. The plant physiologist, however, remains skeptical. The results are "too good to be true". Checking the decision process of the deep model using explainable artificial intelligence (AI), the machine learning expert is flabbergasted to find that the learned deep model uses clues within the data that do not relate to the biological problem at hand, so-called confounding factors. The physiologist loses trust in AI and turns away from it, proclaiming it to be useless. This example encapsulates a critical issue of current explainable AI [1, 2].
Indeed, the seminal paper of Lapuschkin et al. [3] helps in "unmasking Clever Hans predictors and assessing what machines really learn". However, rather than proclaiming, as the plant physiologist might, that the machines have learned the right predictions for the wrong reasons and can therefore not be trusted, we here showcase that interactions between the learning system and the human user can correct the model towards making the right predictions for the right reasons [4]. This may also increase trust in machine learning models. Indeed, trust lies at the foundation of major theories of interpersonal relationships in psychology [5, 6], and we argue that interaction and understandability are central to trust in learning machines. Surprisingly, the link between interacting, explaining, and building trust has been largely ignored by the machine learning literature. Existing approaches focus on passive learning only and do not consider the interaction between the user and the learner [7, 8, 9], whereas interactive learning frameworks such as active [10] and coactive learning [11] do not consider the issue of trust. In active learning, for instance, the model presents unlabeled instances to a user and, in exchange, obtains their labels. This process is completely opaque: the user is oblivious to the model's beliefs and reasons for predictions.
The COVID-19 pandemic has fueled the development of smartphone applications to assist disease management. Many "corona apps" require widespread adoption to be effective, which has sparked public debates about the privacy, security, and societal implications of government-backed health applications. We conducted a representative online study in Germany (n = 1003), the US (n = 1003), and China (n = 1019) to investigate user acceptance of corona apps, using a vignette design based on the contextual integrity framework. We explored apps for contact tracing, symptom checks, quarantine enforcement, health certificates, and mere information. Our results provide insights into data processing practices that foster adoption and reveal significant differences between countries, with user acceptance being highest in China and lowest in the US. Chinese participants prefer the collection of personalized data, while German and US participants favor anonymity. Across countries, contact tracing is viewed more positively than quarantine enforcement, and technical malfunctions negatively impact user acceptance.

CCS CONCEPTS: • Security and privacy → Social aspects of security and privacy; Domain-specific security and privacy architectures; • Human-centered computing → Empirical studies in HCI.
Digital tools play an important role in fighting the current global COVID-19 pandemic. We conducted a representative online study in Germany on a sample of 599 participants to evaluate the user perception of vaccination certificates. We investigated five different variants of vaccination certificates based on deployed and planned designs in a between-group design, including paper-based and app-based variants. Our main results show that the willingness to use and adopt vaccination certificates is generally high. Overall, paper-based vaccination certificates were favored over app-based solutions. The willingness to use digital apps decreased significantly with a higher disposition to privacy and increased with higher worries about the pandemic and acceptance of the coronavirus vaccination. Vaccination certificates represent an interesting use case for studying privacy perceptions of health-related data. We hope that our work will inform the currently ongoing design of vaccination certificates, give us deeper insights into the privacy of health-related data and apps, and prepare us for potential future applications of vaccination certificates and health apps in general.
Concise instruments to determine privacy personas – typical privacy-related user groups – are not available at present. Consequently, we aimed to identify them on a privacy knowledge–privacy behavior ratio based on a self-developed instrument. To achieve this, we conducted an item analysis (N = 820) and a confirmatory factor analysis (CFA) (N = 656) of data from an online study with German participants. Starting with 81 items, we reduced those to an eleven-item questionnaire with the two scales privacy knowledge and privacy behavior. A subsequent cluster analysis (N = 656) revealed three distinct user groups: (1) Fundamentalists, scoring high in privacy knowledge and behavior, (2) Pragmatists, scoring average in privacy knowledge and behavior, and (3) Unconcerned, scoring low in privacy knowledge and behavior. In a closer inspection of the questionnaire, the CFAs supported the model with a close global fit based on RMSEA in a training sample and, to a lesser extent, in a cross-validation sample. Deficient local fit as well as validity and reliability coefficients well below generally accepted thresholds, however, revealed that the questionnaire in its current form cannot be considered a suitable measurement instrument for determining privacy personas. The results are discussed in terms of related persona conceptualizations, the importance of a methodologically sound investigation of the corresponding privacy dimensions, and our lessons learned.