Changing collective behaviour and supporting non-pharmaceutical interventions is an important component in mitigating virus transmission during a pandemic. In a large international collaboration (Study 1, N = 49,968 across 67 countries), we investigated self-reported factors associated with public health behaviours (e.g., spatial distancing and stricter hygiene) and endorsed public policy interventions (e.g., closing bars and restaurants) during the early stage of the COVID-19 pandemic (April-May 2020). Respondents who reported identifying more strongly with their nation consistently reported greater engagement in public health behaviours and support for public health policies. Results were similar for representative and non-representative national samples. Study 2 (N = 42 countries) conceptually replicated the central finding using aggregate indices of national identity (obtained using the World Values Survey) and a measure of actual behaviour change during the pandemic (obtained from Google mobility reports). Higher levels of national identification prior to the pandemic predicted lower mobility during the early stage of the pandemic (r = −0.40). We discuss the potential implications of links between national identity, leadership, and public health for managing COVID-19 and future pandemics.
At the beginning of 2020, COVID-19 became a global problem. Despite efforts to emphasize the importance of preventive measures, not everyone adhered to them. Learning more about the characteristics that determine attitudinal and behavioral responses to the pandemic is therefore crucial for improving future interventions. In this study, we applied machine learning to multi-national data collected by the International Collaboration on the Social and Moral Psychology of COVID-19 (N = 51,404) to test the predictive efficacy of constructs from social, moral, cognitive, and personality psychology, as well as socio-demographic factors, for attitudinal and behavioral responses to the pandemic. The results point to several valuable insights. Internalized moral identity provided the most consistent predictive contribution: individuals who perceived moral traits as central to their self-concept reported higher adherence to preventive measures. Similar patterns were found for morality as cooperation, symbolized moral identity, self-control, open-mindedness, and collective narcissism, whereas the endorsement of conspiracy theories showed the inverse relationship. However, we also found non-negligible variability in explained variance and predictive contributions with respect to macro-level factors such as pandemic stage and cultural region. Overall, the results underscore the importance of morality-related and contextual factors in understanding adherence to public health recommendations during the pandemic.
Artificial Intelligence (AI) algorithms are now able to produce text virtually indistinguishable from text written by humans across a variety of domains. A key question, then, is whether people believe content from AI as much as content from humans. Trust in (human-generated) news media has been decreasing over time, and AI is viewed as lacking human desires and emotions, which suggests that AI-produced news might be viewed as more accurate. Contrary to this, two preregistered experiments conducted on representative U.S. samples (combined N = 4,034) showed that people rated news produced by AI as less accurate than news produced by humans. When news items were tagged as produced by AI (compared with a human), people were more likely to incorrectly rate them as inaccurate when they were actually true, and more likely to correctly rate them as inaccurate when they were indeed false. These results were robust across experimental paradigms (separate and joint evaluations), news item characteristics (actual veracity, age), and several respondent characteristics (e.g., political orientation). This effect is particularly important given the increasing use of AI algorithms in news production and the associated ethical and governance pressures to disclose their use.
Artificial Intelligence (AI) is pervading government and transforming how public services are provided to consumers, from the allocation of government benefits to law enforcement, risk monitoring, and service provision. Despite technological improvements, AI systems are fallible and may err. How do consumers respond when they learn of AI's failures? In thirteen preregistered studies (N = 3,724) across policy areas, we show that algorithmic failures are generalized more broadly than human failures. We term this effect algorithmic transference: an inferential process that generalizes (i.e., transfers) information about one member of a group to another member of that same group. Rather than reflecting generalized algorithm aversion, algorithmic transference is rooted in social categorization: it stems from how people perceive a group of AI systems versus a group of humans. Because AI systems are perceived as more homogeneous than people, failure information about one AI algorithm is transferred to another algorithm at a higher rate than failure information about one person is transferred to another person. In assessing AI's impact on consumers and societies, we show how the premature or mismanaged deployment of faulty AI technologies may undermine the very institutions that AI systems are meant to modernize.