Research has shown that employing social cues (e.g., a name or a human-like avatar) in chatbot design enhances users’ perceptions of social presence and their intentions to use the chatbot. However, the picture is less clear for the social cue of chatbot response time. While some researchers argue that instant responses make chatbots appear less human-like, others suggest that delayed responses are perceived less positively. Drawing on social response theory and expectancy violations theory, this study investigates whether users’ prior experience with chatbots can resolve these inconsistencies in the literature. In a lab experiment (N = 202), participants interacted with a chatbot that responded either instantly or with a delay. The results reveal that a delayed response time has opposing effects on social presence and usage intentions and shed light on the differences between novice users and experienced users – that is, those who have not interacted with a chatbot before vs. those who have. This study contributes to the information systems literature by identifying prior experience as a key moderating factor that shapes users’ social responses to chatbots and by reconciling inconsistencies in the literature regarding the role of chatbot response time. For practitioners, this study points out a drawback of the widely adopted “one-design-fits-all” approach to chatbot design.
Millions of people experience mental health issues each year, increasing the need for health-related services. One emerging technology with the potential to address the resulting shortage of health care providers and other barriers to treatment access is the conversational agent (CA). CAs are software-based systems designed to interact with humans through natural language. However, CAs do not yet live up to their full potential because they are unable to capture dynamic human behavior well enough to provide responses tailored to users’ personalities. To address this problem, we conducted a design science research (DSR) project to design personality-adaptive conversational agents (PACAs). Following an iterative, multi-step approach, we derived and formulated six design principles for PACAs in the domain of mental health care. The results of our evaluation with psychologists and psychiatrists suggest that PACAs can be a promising source of mental health support. With our design principles, we contribute to the body of design knowledge for CAs and provide guidance for practitioners who intend to design PACAs. Instantiating the principles may improve interaction with users who seek support for mental health issues.