What factors influence trust in online information? Americans increasingly get information from social media, public distrust in the mainstream media is growing, and "fake news" is an important new phenomenon. This paper examines the factors that influence trust in scientific claims posted via social media, including the use of hyperlinks and readers' values. The paper describes a crowdsourcing-based experimental design using Amazon's Mechanical Turk platform. The core of the experiment was a set of 10 scientific findings reported in open-access, peer-reviewed scientific journals, which were in turn linked to in articles in both the mainstream media and "fake news" sites. Data analysis involved exploration of relationships between trust and the presence or absence of hyperlinks, and between trust and human values, using nonparametric statistical methods. In terms of the influence of hyperlinks on trust, inclusion of hyperlinks to scientific journals, mainstream media articles, and even hidden URLs led to higher trust than hyperlinks to "fake news" sites or posts without hyperlinks (p < 0.001). Participants who clicked on hyperlinks to scientific articles placed higher trust in the claims than those who did not (p < 0.001). In terms of the influence of values on trust, values had the most impact in cases where individuals saw, but decided not to click on, hyperlinks; this finding seems to indicate that in the absence of firsthand examination of the hyperlinked sites, participants tend to rely more heavily on their values to determine their trust in a scientific claim. These findings indicate that both the presence or absence of hyperlinks and the values of the reader significantly impact trust judgments.
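The kind of nonparametric comparison described above can be sketched as follows. This is a minimal illustration only: the condition names, the 5-point trust ratings, and the choice of the Kruskal-Wallis and Mann-Whitney U tests are assumptions for the sake of example, not the study's actual data or analysis code.

```python
from scipy import stats

# Hypothetical 5-point trust ratings per hyperlink condition (illustrative only)
ratings = {
    "journal":    [5, 4, 5, 4, 5, 3, 4, 5],
    "mainstream": [4, 4, 5, 3, 4, 4, 5, 4],
    "hidden_url": [4, 3, 4, 4, 5, 3, 4, 4],
    "fake_news":  [2, 3, 2, 1, 3, 2, 2, 3],
    "no_link":    [3, 2, 3, 2, 3, 3, 2, 2],
}

# Kruskal-Wallis: a nonparametric test for differences across more than two groups
h_stat, p_value = stats.kruskal(*ratings.values())
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")

# Mann-Whitney U: a nonparametric pairwise comparison, e.g. journal vs. fake-news
u_stat, p_pair = stats.mannwhitneyu(ratings["journal"], ratings["fake_news"],
                                    alternative="two-sided")
print(f"U = {u_stat:.2f}, p = {p_pair:.4f}")
```

Nonparametric tests like these are a common choice for ordinal trust ratings, since they do not assume the Likert-scale responses are normally distributed.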
Artificial intelligence (AI), including machine learning (ML), is widely viewed as having substantial transformative potential across society, and novel implementations of these technologies promise new modes of living, working, and community engagement. Data and the algorithms that operate upon it thus carry an expansive ethical valence, bearing consequence both for the development of these potentially transformative technologies and for our understanding of how best to manage and support their impact. This paper reports upon an interview‐driven study of stakeholders engaged with technology development, policy, and law relating to AI. Among our participating stakeholders, unexpected outcomes and flawed implementations of AI, especially those leading to negative social consequences, are often attributed to ill‐structured, incomplete, or biased data, while the algorithms and interpretations that might produce negative social consequences are seen as neutrally representing the data, or otherwise blameless in those consequences. We propose a more complex infrastructural view of the tools, data, and operation of AI systems as necessary to the production of social good, and explore how representations of the successes and failures of these systems, even among experts, tend to valorize algorithmic analysis and locate fault in the quality of the data rather than in the implementation of systems.
Vulnerable populations (e.g., older adults) can be hard to reach online. During a pandemic like COVID-19 when much research data collection must be conducted online only, these populations risk being further underrepresented. This paper explores methodological strategies for rigorous, efficient survey research with a large number of older adults online, focusing on (1) the design of a survey instrument both comprehensible and usable by older adults, (2) rapid collection (within hours) of data from a large number of older adults, and (3) validation of data using attention checks, independent validation of age, and detection of careless responses to ensure data quality. These methodological strategies have important implications for the inclusion of older adults in online research.
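One of the data-quality checks mentioned above, detection of careless responses, can be sketched with a simple "longstring" screen for straight-lining (answering identically down a scale). The function names and the run-length threshold here are illustrative assumptions, not the instrument the paper actually used.

```python
def longstring(responses):
    """Length of the longest run of identical consecutive responses —
    a common screen for careless 'straight-lining' in survey data."""
    longest = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def flag_careless(responses, max_run=8):
    """Flag a response vector whose longest identical run exceeds max_run.
    The threshold is illustrative and should be tuned to the scale length."""
    return longstring(responses) > max_run

attentive = [4, 3, 5, 2, 4, 3, 4, 5, 2, 3, 4, 3]
careless = [3] * 10 + [4, 3]
print(flag_careless(attentive))  # False
print(flag_careless(careless))   # True
```

In practice such screens are combined with attention-check items and response-time cutoffs rather than used alone, since some genuine respondents legitimately give uniform answers.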
Understanding the factors that influence trust in public health information is critical for designing successful public health campaigns during pandemics such as COVID‐19. We present findings from a cross‐sectional survey of 454 US adults—243 older (65+) and 211 younger (18–64) adults—who responded to questionnaires on human values, trust in COVID‐19 information sources, attention to information quality, self‐efficacy, and factual knowledge about COVID‐19. Path analysis showed that trust in direct personal contacts (B = 0.071, p = .04) and attention to information quality (B = 0.251, p < .001) were positively related to self‐efficacy for coping with COVID‐19. The human value of self‐transcendence, which emphasizes valuing others as equals and being concerned with their welfare, had significant positive indirect effects on self‐efficacy in coping with COVID‐19 (mediated by attention to information quality; effect = 0.049, 95% CI 0.001–0.104) and factual knowledge about COVID‐19 (also mediated by attention to information quality; effect = 0.037, 95% CI 0.003–0.089). Our path model offers guidance for fine‐tuning strategies for effective public health messaging and serves as a basis for further research to better understand the societal impact of COVID‐19 and other public health crises.
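Indirect (mediated) effects of the kind reported above are commonly estimated with a product-of-coefficients approach and a percentile bootstrap confidence interval. The sketch below uses synthetic data and plain least-squares slopes purely to illustrate that method; it is not the study's actual model, data, or results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 454  # matches the survey's sample size; the data below are synthetic

# Synthetic variables standing in for the constructs (illustrative only)
self_transcendence = rng.normal(size=n)
attention_quality = 0.2 * self_transcendence + rng.normal(size=n)  # mediator
self_efficacy = 0.25 * attention_quality + rng.normal(size=n)      # outcome

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Indirect effect = a * b  (X -> M path times M -> Y path)
a = slope(self_transcendence, attention_quality)
b = slope(attention_quality, self_efficacy)
point = a * b

# Percentile bootstrap CI for the indirect effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(slope(self_transcendence[idx], attention_quality[idx])
                * slope(attention_quality[idx], self_efficacy[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A full path analysis would estimate all paths simultaneously in a structural equation model, but the bootstrap product-of-coefficients shown here is the standard way to obtain the kind of asymmetric confidence intervals reported for the indirect effects.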