Artificial Intelligence (AI) is supposed to perform tasks autonomously, make competent decisions, and interact socially with people. From a psychological perspective, AI can thus be expected to impact users’ three Basic Psychological Needs (BPNs), namely (i) autonomy, (ii) competence, and (iii) relatedness to others. While research highlights the fulfillment of these needs as central to human motivation and well-being, their role in the acceptance of AI applications has hitherto received little consideration. Addressing this research gap, our study examined the influence of BPN Satisfaction on Intention to Use (ITU) an AI assistant for personal banking. In a 2×2 factorial online experiment, 282 participants (154 males, 126 females, two non-binary participants) watched a video of an AI finance coach with a female or male synthetic voice that exhibited either high or low agency (i.e., capacity for self-control). In combination, these factors resulted either in AI assistants conforming to traditional gender stereotypes (e.g., low-agency female) or in non-conforming conditions (e.g., high-agency female). Although the experimental manipulations had no significant influence on participants’ relatedness and competence satisfaction, a strong effect on autonomy satisfaction was found. As further analyses revealed, this effect was attributable only to male participants, who felt their autonomy need significantly more satisfied by the low-agency female assistant, consistent with stereotypical images of women, than by the high-agency female assistant. A significant indirect effects model showed that the greater autonomy satisfaction that men, unlike women, experienced from the low-agency female assistant led to higher ITU. The findings are discussed in terms of their practical relevance and the risk of reproducing traditional gender stereotypes through technology design.
Background: Despite efforts, the UK death rate from asthma is the highest in Europe, and 65% of people with asthma in the United Kingdom do not receive the professional care they are entitled to. Experts have recommended the use of digital innovations to help address the issues of poor outcomes and lack of care access. An automated SMS text messaging–based conversational agent (ie, chatbot) created to provide access to asthma support in a familiar format via a mobile phone has the potential to help people with asthma across demographics and at scale. Such a chatbot could help improve the accuracy of self-assessed risk, improve asthma self-management, increase access to professional care, and ultimately reduce asthma attacks and emergencies.
Objective: The aims of this study are to determine the feasibility and usability of a text-based conversational agent that processes a patient’s text responses and short sample voice recordings to calculate an estimate of their risk for an asthma exacerbation and then offers follow-up information for lowering risk and improving asthma control; assess the levels of engagement for different groups of users, particularly those who do not access professional services and those with poor asthma control; and assess the extent to which users of the chatbot perceive it as helpful for improving their understanding and self-management of their condition.
Methods: We will recruit 300 adults through four channels for broad reach: Facebook, YouGov, Asthma + Lung UK social media, and the website Healthily (a health self-management app). Participants will be screened, and those who meet the inclusion criteria (adults diagnosed with asthma who use WhatsApp) will be provided with a link to access the conversational agent through WhatsApp on their mobile phones. Participants will be sent scheduled and randomly timed messages inviting them to engage in dialogue about their asthma risk during the study period.
After a data collection period (28 days), participants will respond to questionnaire items related to the quality of the interaction. A pre- and postquestionnaire will measure asthma control before and after the intervention.
Results: This study was funded in March 2021 and started in January 2022. We developed a prototype conversational agent, which was iteratively improved with feedback from people with asthma, asthma nurses, and specialist doctors. Fortnightly reviews of iterations by the clinical team began in September 2022 and are ongoing. This feasibility study will start recruitment in January 2023. The anticipated completion of the study is July 2023. A future randomized controlled trial will depend on the outcomes of this study and funding.
Conclusions: This feasibility study will inform a follow-up pilot and larger randomized controlled trial to assess the impact of a conversational agent on asthma outcomes, self-management, behavior change, and access to care.
International Registered Report Identifier (IRRID): PRR1-10.2196/42965
The aim of this work was to develop a valid and reliable scale to measure Basic Psychological Need Satisfaction for Technology Use (BPN-TU). According to Self-Determination Theory, satisfaction of the Basic Psychological Needs (BPNs) for Autonomy, Competence, and Relatedness is crucial to well-being and autonomous motivation. Research into the role of BPN Satisfaction in technology use is scarce, partly due to a lack of appropriate measuring tools. To develop a pool of original BPN-TU scale items, we conducted 10 interviews. Based on these items, we conducted four validation studies with four independent samples (total N = 821), collecting user responses to different technologies: a digital voice assistant, an exoskeleton, a chatbot, and a social robot. Confirmatory factor analyses supported good model fit for a twelve-item scale, containing three items each for satisfaction of users' Autonomy, Competence, Relatedness to Others, and Relatedness to Technology. The scale was validated in English and German.
Concerns about threats to human autonomy feature prominently in the field of AI ethics. One aspect of this concern relates to the use of AI systems for problematically manipulative influence. In response to this, the European Union's draft AI Act (AIA) includes a prohibition on AI systems deploying subliminal techniques that alter people's behavior in ways that are reasonably likely to cause harm (Article 5(1)(a)). Critics have argued that the term 'subliminal techniques' is too narrow to capture the target cases of AI-based manipulation. We propose a definition of 'subliminal techniques' that (a) is grounded on a plausible interpretation of the legal text; (b) addresses all or most of the underlying ethical concerns motivating the prohibition; (c) is defensible from a scientific and philosophical perspective; and (d) does not over-reach in ways that impose excessive administrative and regulatory burdens. The definition is meant to provide guidance for design teams seeking to pursue responsible and ethically aligned AI innovation.