Advances in artificial intelligence strengthen chatbots' ability to resemble human conversational agents. For some application areas, it may be tempting not to be transparent about whether a conversational agent is a chatbot or a human. However, the uncanny valley theory suggests that such lack of transparency may cause uneasy feelings in the user. In this study, we combined quantitative and qualitative methods to investigate this issue. First, we used a 2 x 2 experimental research design (n = 28) to investigate the effects of lack of transparency on the perceived pleasantness of the conversation, as well as on perceived human likeness of, and affinity for, the conversational agent. Second, we conducted an exploratory analysis of qualitative participant reports on these conversations. We did not find that lack of transparency negatively affected user experience, but we identified three factors important to participants' assessments. The findings are of theoretical and practical significance and motivate future research.
Chatbots are emerging as interactive systems. However, we lack knowledge on how to classify chatbots and how such classification can be brought to bear in the analysis of chatbot interaction design. In this workshop paper, we propose a typology of chatbots to support such classification and analysis. The typology dimensions address key characteristics that differentiate current chatbots: the duration of the user's relation with the chatbot (short-term and long-term), and the locus of control for the user's interaction with the chatbot (user-driven and chatbot-driven). To explore the usefulness of the typology, we present four example chatbot purposes for which the typology may support analysis of high-level chatbot interaction design. Furthermore, we analyse a sample of 57 chatbots according to the typology dimensions. The relevance and application of the typology for developers and service providers are discussed.
Use of conversational artificial intelligence (AI), such as humanlike social chatbots, is increasing. While a growing number of people are expected to engage in intimate relationships with social chatbots, theories and knowledge of human–AI friendship remain limited. As friendships with AI may alter our understanding of friendship itself, this study aims to explore the meaning of human–AI friendship through a developed conceptual framework. We conducted 19 in-depth interviews with people who have a human–AI friendship with the social chatbot Replika to uncover how they understand and perceive this friendship and how it compares to human friendship. Our results indicate that while human–AI friendship may be understood in similar ways to human–human friendship, the artificial nature of the chatbot also alters the notion of friendship in multiple ways, such as allowing for a more personalized friendship tailored to the user's needs.