Hyper-partisan misinformation has become a major public concern. To examine what type of misinformation label can mitigate hyper-partisan misinformation sharing on social media, we conducted a 4 (label type: algorithm, community, third-party fact-checker, and no label) × 2 (post ideology: liberal vs. conservative) between-subjects online experiment (N = 1,677) in the context of COVID-19 health information. The results suggest that for liberal users, all labels reduced the perceived accuracy and believability of fake posts regardless of the posts' ideology. In contrast, for conservative users, the efficacy of the labels depended on whether the posts were ideologically consistent: algorithmic labels were more effective than community labels in reducing the perceived accuracy and believability of fake conservative posts, whereas all labels were effective in reducing belief in fake liberal posts. Our results shed light on how the effects of different misinformation labels vary with people's political ideology.
AI technologies continue to advance, from digital assistants to assisted decision-making. However, designing AI remains a challenge given its unknown outcomes and uses. One way to expand AI design is by centering stakeholders in the design process. We conduct co-design sessions with gig workers to explore the design of gig worker-centered tools as informed by their driving patterns, decisions, and personal contexts. Using workers' own data as well as city-level data, we create probes (interactive data visuals) that participants explore to surface the well-being concerns and positionalities that shape their work strategies. We describe participant insights and corresponding AI design considerations surfaced from data probes about: 1) workers' well-being trade-offs and positionality constraints, 2) factors that impact well-being beyond those in the data probes, and 3) instances of unfair algorithmic management. We discuss the implications for designing data probes and using them to elevate worker-centered AI design, as well as for worker advocacy.
CCS Concepts: • Human-centered computing → Human computer interaction (HCI).
CCS Concepts: • Human-centered computing → Empirical studies in collaborative and social computing.