Abstract: Adaptation decisions made by context-aware applications on behalf of users are based on evaluations of the current context and of user preferences. This context information is imperfect by nature and can cause applications to behave in ways that users do not expect. Applications that exhibit unwanted behaviour negatively impact usability and violate the trust users place in them. Intelligibility and control in applications can help users understand why applications decided to behave in certain ways, and to f…
“…The other 17 papers solely provide recommendations for addressing RI in the design and implementation of AI technologies. While four of them discuss technical approaches and methods to address principles such as trust and transparency in AI, these were classified as “solely recommendations” because they do not report the respective methods actually being applied in existing AI technologies (Ferreira et al., 2019; Fong et al., 2012; Hoque et al., 2009; Vance et al., 2018).…”
Background and Objectives
Artificial intelligence (AI) is widely positioned to become a key element of intelligent technologies used in long-term care (LTC) for older adults. The increasing relevance and adoption of AI has encouraged debate over the societal and ethical implications of introducing and scaling AI. This scoping review investigates how the design and implementation of AI technologies in LTC is addressed responsibly: so-called responsible innovation (RI).
Research Design and Methods
We conducted a systematic literature search in five electronic databases using concepts related to LTC, AI and RI. We then performed a descriptive and thematic analysis to map the key concepts, types of evidence and gaps in the literature.
Results
After reviewing 3,339 papers, 25 papers were identified that met our inclusion criteria. From this literature, we extracted three overarching themes: user-oriented AI innovation; framing AI as a solution to RI issues; and context-sensitivity. Our results provide an overview of measures taken and recommendations provided to address responsible AI innovation in LTC.
Discussion and Implications
The review underlines the importance of the context of use when addressing responsible AI innovation in LTC. However, limited empirical evidence actually details how responsible AI innovation is addressed in context. Therefore, we recommend expanding empirical studies on RI at the level of specific AI technologies and their local contexts of use. Also, we call for more specific frameworks for responsible AI innovation in LTC to flexibly guide researchers and innovators. Future frameworks should clearly distinguish between RI processes and outcomes.
With the recent advances in the field of artificial intelligence, an increasing number of decision-making tasks are delegated to software systems. A key requirement for the success and adoption of such systems is that users must trust system choices or even fully automated decisions. To achieve this, explanation facilities have been widely investigated as a means of establishing trust in these systems since the early years of expert systems. With today's increasingly sophisticated machine learning algorithms, new challenges in the context of explanations, accountability, and trust towards such systems constantly arise. In this work, we systematically review the literature on explanations in advice-giving systems. This is a family of systems that includes recommender systems, one of the most successful classes of advice-giving software in practice. We investigate the purposes of explanations as well as how they are generated, presented to users, and evaluated. As a result, we derive a novel comprehensive taxonomy of aspects to be considered when designing explanation facilities for current and future decision support systems. The taxonomy includes a variety of different facets, such as explanation objective, responsiveness, content, and presentation. Moreover, we identified several challenges that remain unaddressed so far, for example fine-grained issues associated with the presentation of explanations and with how explanation facilities are evaluated.
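To illustrate what an "explanation facility" in an advice-giving system might look like in its simplest form, the following is a minimal, hypothetical sketch (not taken from the reviewed papers): a recommendation record that carries the evidence behind it, plus a function that turns that evidence into a user-facing explanation. The names `Recommendation` and `explain` are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Recommendation:
    """A recommended item together with the evidence that produced it."""
    item: str
    score: float
    evidence: List[str]  # items the user previously liked that drove the score


def explain(rec: Recommendation) -> str:
    """Generate a natural-language explanation for a recommendation.

    Corresponds to the 'content' and 'presentation' facets discussed above:
    what evidence to expose, and how to phrase it for the user.
    """
    liked = ", ".join(rec.evidence)
    return (f"We suggest '{rec.item}' (score {rec.score:.2f}) "
            f"because you liked: {liked}.")


rec = Recommendation(item="The Martian", score=0.87,
                     evidence=["Interstellar", "Gravity"])
print(explain(rec))
# prints: We suggest 'The Martian' (score 0.87) because you liked: Interstellar, Gravity.
```

A real facility would of course choose among explanation objectives (transparency, persuasion, trust) and adapt the presentation; this sketch only shows the basic pipeline from evidence to explanation text.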
“…This is achieved by the developers creating appropriate feedback forms. The changes/modifications performed by users are mapped back to the context-aware rules using different logics/algorithms (Predicate Logic and Defeasible Logic) [14]. Although their approach is promising in providing user control and intelligibility, the feedback forms developed are not very flexible or user-friendly, and they require end users to work through too many low-level options.…”
A smart home is a context-aware system that adapts itself autonomously in response to context in order to satisfy user needs and to improve safety, security, resource use, etc. On the one hand, software autonomy serves the basic purpose of pervasive computing by reducing interaction with users, easing the use of the system, and reducing user distraction. On the other hand, it takes control away from the users of the applications, making them feel a loss of control over their context-aware applications. Situations may arise that require smart home users to interact with applications to control their behaviour: applications may not behave as expected, user preferences may change over time, or users may want to add new behaviours. This research addresses this issue and proposes an approach that provides wider support for user control by exposing and allowing manipulation of (1) application parameters and (2) adaptation logic(s), thus allowing users to add new behaviours. Using this approach, a complete system was developed to assess its effectiveness; the system was tested on three different context-aware applications, and a preliminary usability study was conducted to evaluate its effectiveness.
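The two control mechanisms the abstract names, exposing (1) application parameters and (2) adaptation logic, can be sketched as follows. This is a hypothetical illustration under assumed names (`AdaptationRule`, `SmartHomeApp`, `set_param`, `add_rule`), not the paper's actual implementation: rules carry user-editable parameters, and users can both tune those parameters and register entirely new rules.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Context = Dict[str, float]  # sensed context, e.g. {"lux": 5.0}


@dataclass
class AdaptationRule:
    name: str
    params: Dict[str, float]  # (1) exposed, user-editable parameters
    condition: Callable[[Context, Dict[str, float]], bool]
    action: str               # behaviour triggered when the condition holds


@dataclass
class SmartHomeApp:
    rules: List[AdaptationRule] = field(default_factory=list)

    def add_rule(self, rule: AdaptationRule) -> None:
        """(2) Users add new behaviours by registering new adaptation rules."""
        self.rules.append(rule)

    def set_param(self, rule_name: str, key: str, value: float) -> None:
        """(1) Users manipulate the exposed parameters of an existing rule."""
        for rule in self.rules:
            if rule.name == rule_name:
                rule.params[key] = value

    def adapt(self, context: Context) -> List[str]:
        """Evaluate every rule against the current context; return actions."""
        return [r.action for r in self.rules if r.condition(context, r.params)]


app = SmartHomeApp()
app.add_rule(AdaptationRule(
    name="night_light",
    params={"lux_threshold": 10.0},
    condition=lambda ctx, p: ctx["lux"] < p["lux_threshold"],
    action="turn_on_hall_light",
))
print(app.adapt({"lux": 5.0}))   # → ['turn_on_hall_light']
app.set_param("night_light", "lux_threshold", 2.0)
print(app.adapt({"lux": 5.0}))   # → []  (user-adjusted threshold: rule no longer fires)
```

The design point is that both the parameters and the rule set are first-class, inspectable objects rather than being buried in application code, which is what makes the adaptation behaviour intelligible and controllable by end users.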