Abstract: Partisan incivility is prevalent in news comments, but we have limited insight into how journalists and news users engage with it. Gatekeeping, cognitive bias, and social identity theories suggest that journalists may tolerate incivility while users actively promote partisan incivility. Using 9.6 million comments from The New York Times, we analyze whether the presence of uncivil and partisan terms affects how journalists and news users engage with comments. Results show that partisanship and incivility increa…
“…Beyond assessments of credibility, an abusive comment may also affect behaviors such as news-seeking by making women authors' violations of gendered social expectations (and the outlet for employing her) more salient. Previous work focuses on behaviors in the comment section: Work by Muddiman and Stroud (2017) finds that when incivility was partisan, depending on the congeniality of views expressed, users responded by recommending or flagging the comment. Similarly, other studies find that users are more likely to engage with a comment when it contains profanity (Kwon & Cho, 2017).…”
Section: Conditional Gender Effects for Abusive Comments
Recent work suggests women authors experience more abuse in online comments than men, but we do not know whether these abusive comments affect people's perceptions. Given renewed interest in the experience of women online, we ask: does exposure to abusive comments affect perceptions of women authors' credibility? And does this penalty extend to the outlet? To answer these questions, we employed a survey experiment that manipulated exposure to an abusive comment and author gender. We found a significant effect of the abusive comment on author credibility and on intention to seek news from the author and outlet in the future, but gender of the author did not moderate these effects. To ensure the null effects for gender were not an artifact of comment or topic, we fielded two additional survey experiments. Across topics, whether or not the abuse was gender-specific, we found abusive comments exert significant negative effects on evaluations, regardless of author gender. Our results have implications for news organizations considering comments.
“…People can add to a comment section "echo chamber" with comments that further reinforce one-sided messaging (Suhay, Blackwell, Roche, & Bruggeman, 2015). Polarized content in comment sections may also amplify further polarized comments (Muddiman & Stroud, 2017). What is more, online political dialogue has a tendency to increase political polarization and extreme viewpoints (Hwang, Kim, & Huh, 2014).…”
People are often exposed to polarized viewpoints in web comment sections. Inspired by attribution theory and framing theory, this article tests the effects of comments that frame either a politician or a journalist as triggering evasiveness in a media interview. We compare attributions ascribing deceptiveness to the politician versus external attributions implicating the media situation. In the first experiment, comment sections affect perceptions of evasiveness, the credibility of the politician relative to the journalist, and people's attitudes toward the politician and journalist. A second study replicates these findings, and voters type comments that largely reflect the comments to which they were exposed. Also, perceptions of external control by the journalist affect perceptions of the politician. The article extends attribution theory and framing theory via commonly encountered online exposure that affects people's perceptions of politicians as deceptive relative to their journalistic arbiters.
“…Incivility is difficult to define because the decision of what is civil and uncivil is subjectively shaped (Coe et al., 2014; Herbst, 2010). Therefore, achieving consensus about where to draw the line between civil and uncivil discourse is a complex problem (Muddiman, 2017; Stryker, Conway, & Danielson, 2016). Scholars have defined incivility as the communication of disagreement combined with a dismissive, disrespectful, aggressive, or hostile tone (Coe et al., 2014; Hwang, Kim, & Kim, 2016).…”
Section: Theory: Incivility and Impoliteness
“…Researchers in social sciences have argued that using such language can be considered a violation of democratic and social norms (Muddiman & Stroud, 2017). They have, therefore, used the term 'incivility' to describe different forms of disrespectful and harmful language (e.g., Coe, Kenski, & Rains, 2014; Muddiman & Stroud, 2017). Previous studies have reported various negative effects of uncivil comments on the readers of online discussions.…”
Section: Introduction
“…Still, from a psychological viewpoint, these forms could affect the attitudes of readers even more strongly than obvious forms of offensive language (Kalch & Naab, 2016; Papacharissi, 2004). Based on previous theoretical work on incivility (Muddiman & Stroud, 2017; Papacharissi, 2004), we therefore train classifiers on both impolite comments—postings that are offensive but not necessarily harmful to other users—and 'truly' uncivil comments, which often include subtle forms of racism, extremism, and undemocratic appeals (e.g., Kalch & Naab, 2017).…”
Impoliteness and incivility in online discussions have recently been discussed as relevant issues in communication science. However, automatically detecting these concepts with computational methods is challenging. In our study, we build and compare supervised classification models to predict impoliteness and incivility in online discussions on German media outlets on Facebook. Using a sample of 10,000 hand-coded user comments and a theory-grounded coding scheme, we develop classifiers on different feature sets including unigram and n-gram distributions as well as various dictionary-based features. Our findings show that impoliteness and incivility can be measured to a certain extent on the word level of a comment, but the models suffer from high misclassification rates, even if lexical resources are included. This is mainly because the classifiers cannot reveal subtle forms of incivility and because comment authors often use predictive words of incivility or impoliteness in non-offensive ways or in different contexts. Still, when applying the classifiers to a comparable set of comments, we find that the machine-coded categories and the hand-coded categories reveal similar patterns regarding the distribution of and the user reactions to uncivil/impolite comments. The findings of our study therefore provide new insights into the supervised machine learning approach to the detection of different forms of offensive language.
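The pipeline this abstract describes — hand-coded comments, word-level n-gram features combined with dictionary-based features, and a supervised classifier — can be sketched roughly as follows. The toy training comments, the offensive-word lexicon, and the choice of a Naive Bayes model are illustrative assumptions for the sketch, not the study's actual data, lexical resources, or model.

```python
# Minimal sketch of a supervised offensive-comment classifier:
# unigram features plus a dictionary-based feature, feeding a
# multinomial Naive Bayes model with Laplace smoothing.
# Training data, lexicon, and model choice are illustrative
# assumptions, not the original study's setup.
import math
from collections import Counter

# Hypothetical stand-in for the lexical resources the study mentions.
OFFENSIVE_TERMS = {"idiot", "stupid", "moron", "trash"}

def featurize(comment):
    """Unigram tokens plus one pseudo-token per dictionary hit."""
    toks = comment.lower().split()
    toks += ["<OFFENSIVE>"] * sum(t in OFFENSIVE_TERMS for t in toks)
    return toks

class NaiveBayesClassifier:
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, comments, labels):
        self.counts = {0: Counter(), 1: Counter()}
        self.doc_totals = Counter(labels)
        for comment, label in zip(comments, labels):
            self.counts[label].update(featurize(comment))
        self.vocab = set(self.counts[0]) | set(self.counts[1])
        return self

    def predict(self, comment):
        scores = {}
        for label in (0, 1):
            # log prior + smoothed log likelihood of each token
            score = math.log(self.doc_totals[label] / sum(self.doc_totals.values()))
            total = sum(self.counts[label].values()) + len(self.vocab)
            for tok in featurize(comment):
                score += math.log((self.counts[label][tok] + 1) / total)
            scores[label] = score
        return max(scores, key=scores.get)

# Toy hand-coded examples (1 = impolite/uncivil, 0 = civil).
train = [
    ("you are an idiot and your argument is trash", 1),
    ("what a stupid take typical moron", 1),
    ("only an idiot would believe this stupid claim", 1),
    ("i disagree but thanks for explaining your view", 0),
    ("interesting article i learned something new", 0),
    ("good point i had not considered that angle", 0),
]
model = NaiveBayesClassifier().fit([c for c, _ in train], [l for _, l in train])
print(model.predict("you stupid idiot"))                  # 1 (uncivil)
print(model.predict("thanks for the thoughtful reply"))   # 0 (civil)
```

The abstract's key finding — that predictive words are often used in non-offensive ways or different contexts — shows up even in a sketch like this: the model scores tokens independently, so quoted or ironic uses of lexicon words would be misclassified, which is one reason the study reports high misclassification rates despite lexical resources.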
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.