The Research Ethics Committee of the Faculty of Pedagogy and Psychology (ELTE) granted central permission (permission nr. 2019/47). Many other labs also obtained IRB approval; these approvals can be found at https://osf.io/j6kte/. Participants gave informed consent before starting the experiment. Only participants recruited through MTurk or Prolific received monetary compensation.
Moral framing and reframing strategies persuade people holding moralized attitudes (i.e., attitudes having a moral basis). However, these strategies may have unintended side effects: They have the potential to moralize people’s attitudes further and, as a consequence, lower their willingness to compromise on issues. Across three experimental studies with adult U.S. participants (Study 1: N = 2,151; Study 2: N = 1,590; Study 3: N = 1,015), we used persuasion messages (moral, nonmoral, and control) that opposed new big-data technologies (crime-surveillance technologies and hiring algorithms). We consistently found that moral frames were persuasive and moralized people’s attitudes, whereas nonmoral frames were persuasive and de-moralized people’s attitudes. Moral frames also lowered people’s willingness to compromise and reduced behavioral indicators of compromise. Exploratory analyses suggest that feelings of anger and disgust may drive moralization, whereas perceiving the technologies to be financially costly may drive de-moralization. The findings imply that the use of moral frames can deepen and entrench moral divides rather than bridge them.
Theories of moralization argue that moral relevance varies due to inter-individual differences, domain differences, or a mix of both. Predictors associated with these sources of variation have been studied in isolation to assess their unique contribution to moralization. Across three studies (Study 1: N = 376; Study 2a: N = 621; Study 2b: N = 589) assessing attitudes toward new big-data technologies, we found that moralization is best explained by theories focusing on inter-individual variation (∼29%) and intra-individual variation across technology domains (∼49%), and less by theories focusing on differences between technology domains (∼6%). We simultaneously examined 15 inter-individual and 16 intra-individual predictors that potentially explain this variation. Predictors directly relevant to the technologies (e.g., justice concerns), cognitive styles (e.g., faith in intuition), and emotional reactions (e.g., anger) best explained variation in moral relevance. Accordingly, scholars should simultaneously adopt and adapt moralization theories related to inter-individual and intra-individual differences across domains rather than treating them in isolation.