Based on the terror management health model (TMHM), we examined the impact of terrorist attacks as reminders of death on implicit alcohol-related attitudes, including the moderating role of conscious death-related thoughts and alcohol-based self-esteem (ABS). In an online experiment (N = 487), we analyzed how thoughts and memories about a recent terrorist attack affected implicit alcohol-related attitudes unconsciously (with a delay task) and consciously (without a delay task). We found that such thoughts increased death-thought accessibility. While there was no main effect of the salience of the terrorist attack on alcohol-related attitudes, respondents with low ABS held more positive attitudes when thinking about the attack unconsciously, compared to the control group. Respondents with high ABS in the delay task had lower alcohol-IAT scores. Overall, this study provides evidence that thoughts about terrorism, which can be provoked through media, affect alcohol-related attitudes. Such attitudes may in turn cause negative health consequences through health-related decisions.
When is content on social media offensive enough to warrant content moderation? While social media platforms impose limits on what can be posted, we know little about where users draw the line when it comes to offensive language, and what measures they wish to see implemented when content crosses the boundary of what is deemed acceptable. Conducting randomized experiments with over 5,000 participants, we study how different types of offensive language causally affect users' content moderation preferences. We quantify the causal effects of uncivil, intolerant, and threatening language by randomly introducing these elements into fictitious social media posts targeting various social groups. While overall there is limited demand for action against offensive behavior, the severity of the attack matters to the average participant. Among our treatments, violent threats generate the greatest support for content moderation of various types, including punishments that would be viewed as censorship in some contexts, such as taking down content or suspending accounts.