Recently, Altay et al. (2021, J. Exp. Psychol.: Appl., doi:10.1037/xap0000400) showed that 5 min of interaction with a chatbot led to increases in positive COVID-19 vaccination attitudes and intentions in a French population. Here we replicate this effect in a vaccine-hesitant, UK-based population. We attempt to isolate what made the chatbot condition effective by controlling the amount of information provided, the trustworthiness of the information and the level of interactivity. Like Altay et al., our experiment allowed participants to navigate a branching dialogue by choosing questions of interest about COVID-19 vaccines. Our control condition used the same questions and answers but removed participant choice by presenting the dialogues at random. Importantly, we also targeted those who were either against or neutral towards COVID-19 vaccinations to begin with, screening out those with already positive attitudes. Replicating Altay et al., we found a similarly sized increase in positive attitudes towards vaccination, and in intention to get vaccinated. Unlike Altay et al., we found no difference between our two conditions: choosing the questions did not increase vaccine attitudes or intentions any more than our control condition. These results suggest that the attitudes of the vaccine hesitant are modifiable with exposure to in-depth, trustworthy and engaging dialogues.
Background The COVID-19 pandemic raised novel challenges in communicating reliable, continually changing health information to a broad and sometimes skeptical public, particularly around COVID-19 vaccines, which, despite being comprehensively studied, were the subject of viral misinformation. Chatbots are a promising technology to reach and engage populations during the pandemic. To inform and communicate effectively with users, chatbots must be highly usable and credible. Objective We sought to understand how young adults and health workers in the United States assessed the usability and credibility of a web-based chatbot called Vira, created by the Johns Hopkins Bloomberg School of Public Health and IBM Research using natural language processing technology. Using a mixed method approach, we sought to rapidly improve Vira’s user experience to support vaccine decision-making during the peak of the COVID-19 pandemic. Methods We recruited racially and ethnically diverse young people and health workers, with both groups from urban areas of the United States. We used the validated Chatbot Usability Questionnaire to understand the tool’s navigation, precision, and persona. We also conducted 11 interviews with health workers and young people to understand the user experience, whether they perceived the chatbot as confidential and trustworthy, and how they would use the chatbot. We coded and categorized emerging themes to understand the determining factors for participants’ assessment of chatbot usability and credibility. Results In all, 58 participants completed a web-based usability questionnaire and 11 completed in-depth interviews. Most questionnaire respondents said the chatbot was “easy to navigate” (51/58, 88%) and “very easy to use” (50/58, 86%), and many (45/58, 78%) said its responses were relevant. The mean Chatbot Usability Questionnaire score was 70.2 (SD 12.1) and scores ranged from 40.6 to 95.3. 
Interview participants felt the chatbot achieved high usability due to its strong functionality, performance, and perceived confidentiality and that the chatbot could attain high credibility with a redesign of its cartoonish visual persona. Young people said they would use the chatbot to discuss vaccination with hesitant friends or family members, whereas health workers used or anticipated using the chatbot to support community outreach, save time, and stay up to date. Conclusions This formative study conducted during the pandemic’s peak provided user feedback for an iterative redesign of Vira. Using a mixed method approach provided multidimensional feedback, identifying how the chatbot worked well—being easy to use, answering questions appropriately, and using credible branding—while offering tangible steps to improve the product’s visual design. Future studies should evaluate how chatbots support personal health decision-making, particularly in the context of a public health emergency, and whether such outreach tools can reduce staff burnout. Randomized studies should also be conducted to measure how chatbots countering health misinformation affect user knowledge, attitudes, and behavior.