The online environment dramatically expands the number of ways people can encounter news, but questions remain about whether these abundant opportunities facilitate diverse news exposure. This project examines key questions regarding how internet users arrive at news and what kinds of news they encounter. We account for a multiplicity of avenues to news online, some of which have never been analyzed: (1) direct access to news websites, (2) social networks, (3) news aggregators, (4) search engines, (5) webmail, and (6) hyperlinks in news. We examine the extent to which each avenue promotes news exposure and exposes users to news sources that are left-leaning, right-leaning, and centrist. Combining this with information on individual political leanings, we show the extent of dissimilar, centrist, or congenial exposure resulting from each avenue. We rely on web browsing history records from 636 social media users in the US paired with survey self-reports, a unique data set that allows us to examine exposure at both the aggregate and individual level. Visits to news websites account for about 2 percent of all URL visits and are unevenly distributed among users. The most widespread ways of accessing news are search engines and social media platforms (and hyperlinks within news sites once people arrive at news). These two avenues also increase dissimilar news exposure compared to accessing news directly, yet direct news access drives the highest proportion of centrist exposure.
This study investigates the extent to which specific features of news articles about election campaigns affect reader engagement and civility in news comments. Using content analysis of articles ( N = 830) and comments ( N = 29,421) published during the 2015 Portuguese Legislative elections, we test the impact of negative coverage, issue coverage, and game coverage (politics as a game) on the number of comments an article receives and the level of civility in those comments. Additionally, we explore how a commenter's affective polarisation may moderate the effects on incivility. Findings show that negativity towards political actors in an article is tied to both an increase in the number of comments and an increase in their incivility. Game coverage led only to a significant increase in the number of comments, while actor-related positivity was also associated with an increase in incivility. Issue coverage had neither positive nor negative effects. The results inform newsrooms and academics about the implications of different types of election reporting, while accounting for features of news articles that are typically not integrated into a single study.
Lay Summary
In an era of unprecedented political divides and misinformation, artificial intelligence (AI) and algorithms are often seen as the culprits. In contrast to these dominant narratives, we argue that AI might be perceived as less biased than a human in online political contexts. We relied on six preregistered experiments in three countries (the United States, Spain, and Poland) to test whether internet users perceive AI and AI-assisted humans more favorably than humans alone: (a) across various distinct scenarios online, and (b) when exposing people to opposing political information on a range of contentious issues. Contrary to our expectations, human agents were consistently perceived more favorably than AI, except when recommending news. These findings suggest that people prefer human intervention in most online political contexts.
Hateful content online is a concern for social media platforms, policymakers, and the public. This has led high-profile content platforms, such as Facebook, to adopt algorithmic content-moderation systems; however, the impact of algorithmic moderation on user perceptions is unclear. We experimentally test the extent to which the type of content being removed (profanity vs hate speech) and the explanation given for its removal (no explanation vs link to community guidelines vs specific explanation) influence user perceptions of human and algorithmic moderators. Our preregistered study encompasses representative samples ( N = 2870) from the United States, the Netherlands, and Portugal. Contrary to expectations, our findings suggest that algorithmic moderation is perceived as more transparent than human moderation, especially when no explanation is given for content removal. In addition, directing users to community guidelines for further information on content deletion negatively affects perceptions of outcome fairness and trust.