In recent years, the development of information and communication technologies (ICT), such as social media, has changed the way people communicate and engage in social movements. While conventional movements were fought in the streets, social media has enabled movements to take place online. In this paper, we aim to investigate the role of social media in social movements that evolve online. Specifically, we examined Twitter communication during the #metoo debate. To this end, we applied methods from social network analysis to identify influential users participating in the debate. Conducting a manual content analysis, we classified 200 power users into roles. Likewise, a manual classification of 1,271 tweets yielded distinct communication categories. Overall, the results point to different motives: First, the communication was deeply concerned with the issue of sexual harassment, calling for attention and action. Second, we found reason to believe that self-serving and branding intentions drove participation.
In recent years, the development of information and communication technologies, such as social media, has changed the way people communicate and engage in social movements. While conventional movements were fought in the streets, social media has enabled movements to take place online. In this article, we aim to investigate the role of social media in social movements that evolve online through the lens of the theory of connective action. Specifically, we examined Twitter communication during the #metoo debate. To this end, we examined two datasets (2017 and 2019) and combined methods from social media analytics to identify influential users with a manual content analysis to classify these influential users into roles. Likewise, a manual classification yielded distinct communication categories. Through regression analysis, we were able to gauge the individual contribution of these categories and roles based on retweet probability. This study introduces, for the first time, the terms connective action starters and connective action maintainers.
In this paper, we investigate the human ability to distinguish political social bots from humans on Twitter. Following motivated reasoning theory from social and cognitive psychology, our central hypothesis is that especially those accounts that are opinion-incongruent are perceived as social bot accounts when the account is ambiguous about its nature. We also hypothesize that credibility ratings mediate this relationship. We asked N = 151 participants to evaluate 24 Twitter accounts and decide whether the accounts were humans or social bots. Findings support our motivated reasoning hypothesis for a sub-group of Twitter users (those who are more familiar with Twitter): Accounts that are opinion-incongruent are evaluated as relatively more bot-like than accounts that are opinion-congruent. Moreover, it did not matter whether the account was clearly a social bot, clearly human, or ambiguous about its nature. This effect was mediated by perceived credibility, in the sense that congruent profiles were evaluated as more credible, resulting in lower perceptions as bots.
CCS Concepts: • Human-centered computing → Empirical studies in HCI.
Trust has been recognized as a central variable to explain both the resistance to using automated systems (under-trust) and the overreliance on automated systems (over-trust). To achieve appropriate reliance, users’ trust should be calibrated to reflect a system’s capabilities. Studies from various disciplines have examined different interventions to attain such trust calibration. From a literature body of more than 1,000 papers, we identified 96 relevant publications that aimed to calibrate users’ trust in automated systems. To provide an in-depth overview of the state of the art, we reviewed and summarized the measurements of trust calibration, the interventions, and the results of these efforts. For the numerous promising calibration interventions, we extracted common design choices and structured them into four dimensions of trust calibration interventions to guide future studies. Our findings indicate that the measurement of trust calibration often limits the interpretation of the effects of different interventions. We suggest future directions for addressing this problem.
In this study, we investigate the role of emotions in identity-protection cognition to understand how people draw inferences from politicized (mis-)information. In doing so, we combine the identity-protection cognition theory with insights about the effects of emotions on information processing. Central to our study, we assume that the relationship between an individual's political identity and inference-conclusions of politicized information is mediated by the experienced emotions anger, anxiety, and enthusiasm. In an online study, 463 German adults were asked to interpret numerical information in two politically polarizing contexts (refugee intake and driving ban for Diesel cars) and one nonpolarizing context (treatment of skin rash). Results showed that, although emotions were mostly unrelated to political identity, they predicted performance more consistently than political identity and cognitive sophistication.
This article investigates under which conditions users on Twitter engage with or react to social bots. Based on insights from human–computer interaction and motivated reasoning, we hypothesize that (1) users are more likely to engage with human-like social bot accounts and (2) users are more likely to engage with social bots which promote content congruent to the user’s partisanship. In a preregistered 3 × 2 within-subject experiment, we asked N = 223 US Americans to indicate whether they would engage with or react to different Twitter accounts. Accounts systematically varied in their displayed humanness (low humanness, medium humanness, and high humanness) and partisanship (congruent and incongruent). In line with our hypotheses, we found that the more human-like accounts are, the greater is the likelihood that users would engage with or react to them. However, this was only true for accounts that shared the same partisanship as the user.