Despite the immense societal importance of ethically designing artificial intelligence, little research exists on public perceptions of ethical artificial intelligence principles. This is all the more striking given that ethical artificial intelligence development explicitly aims to be human-centric and to benefit society as a whole. In this study, we investigate how ethical principles (explainability, fairness, security, accountability, accuracy, privacy, and machine autonomy) are weighted against each other. This question is especially important because considering all ethical principles simultaneously is not only costly but sometimes impossible, forcing developers to make specific trade-off decisions. In this paper, we provide first answers on the relative importance of ethical principles for a specific use case: the use of artificial intelligence in tax fraud detection. The results of a large conjoint survey ([Formula: see text]) suggest that, by and large, German respondents evaluate the ethical principles as equally important. However, a subsequent cluster analysis shows that different preference models for ethically designed systems exist among the German population. These clusters differ substantially not only in which ethical principles they prefer but also in how much importance they assign to those principles. We further describe how these groups are constituted in terms of sociodemographics as well as opinions on artificial intelligence. Societal implications and design challenges are discussed.
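The two-step analysis described here (deriving a relative importance weight per principle from the conjoint data, then clustering respondents by their preference profiles) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' actual pipeline: the synthetic importance weights, the sample size of 500, and the choice of four clusters are all hypothetical placeholders.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
principles = ["explainability", "fairness", "security", "accountability",
              "accuracy", "privacy", "machine_autonomy"]

# Synthetic stand-in: one row per respondent, one importance weight per
# principle summing to 1 (in a real conjoint these would be derived from
# respondent-level part-worth utilities).
weights = pd.DataFrame(rng.dirichlet(np.ones(len(principles)), size=500),
                       columns=principles)

# Cluster respondents by their preference profiles; the number of clusters
# here is arbitrary and would normally be chosen via fit diagnostics.
X = StandardScaler().fit_transform(weights)
weights["cluster"] = KMeans(n_clusters=4, n_init=10,
                            random_state=0).fit_predict(X)

# Mean importance per principle within each cluster shows how the
# preference profiles differ.
print(weights.groupby("cluster").mean().round(3))
```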
In recent years, Artificial Intelligence (AI) has gained much popularity, both within the scientific community and among the public. AI is often credited with positive impacts in social domains such as medicine and the economy. On the other hand, there is also growing concern about its precarious impact on society and individuals. Opinion polls frequently probe public fears of autonomous robots and artificial intelligence, a phenomenon that is also coming into scholarly focus. As threat perceptions arguably vary with the reach and consequences of AI functionalities and with the domain of application, research still lacks a measurement instrument precise enough to allow for widespread applicability. We propose a fine-grained scale for measuring threat perceptions of AI that accounts for four functional classes of AI systems and is applicable across domains of AI application. Using a standardized questionnaire in a survey study (N = 891), we evaluate the scale in three distinct AI domains (medical treatment, job recruitment, and loan origination). The data support the dimensional structure of the proposed Threats of AI (TAI) scale as well as the internal consistency and factorial validity of the indicators. Implications of the results and the empirical application of the scale are discussed in detail, and recommendations for further empirical use of the TAI scale are provided.
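The kind of psychometric checks reported for the scale (internal consistency per dimension, plus a check of the factor structure) can be sketched as below. This is a toy example on synthetic data, not the TAI scale's actual items or analysis: the four dimensions with three items each are placeholders, and exploratory factor analysis is used here as a lightweight stand-in for the confirmatory analysis a validation study would typically report.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
# Synthetic stand-in for Likert-type responses: 891 respondents, 3 items per
# hypothetical threat dimension (the real TAI items differ).
latent = rng.normal(size=(891, 4))
items = np.hstack([latent[:, [d]] + rng.normal(0, 0.6, size=(891, 3))
                   for d in range(4)])
cols = [f"dim{d+1}_item{i+1}" for d in range(4) for i in range(3)]
df = pd.DataFrame(items, columns=cols)

# Internal consistency per assumed dimension.
for d in range(4):
    sub = df[[c for c in cols if c.startswith(f"dim{d+1}")]]
    print(f"dimension {d+1}: alpha = {cronbach_alpha(sub):.2f}")

# Exploratory check of the assumed four-factor structure.
fa = FactorAnalyzer(n_factors=4, rotation="oblimin")
fa.fit(df)
print(pd.DataFrame(fa.loadings_, index=cols).round(2))
```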
In combating the ongoing global health threat of the COVID-19 pandemic, decision-makers have to act on a multitude of relevant health data with severe potential consequences for the affected patients. Because of their presumed advantages in handling and analyzing vast amounts of data, algorithmic decision-making (ADM) systems are being implemented and are substituting for humans in decision-making processes. In this study, we focus on a specific application of ADM in contrast to human decision-making (HDM): the allocation of COVID-19 vaccines to the public. In particular, we examine the roles of trust and social group preference in the legitimacy of vaccine allocation. We conducted a survey with a 2 × 2 randomized factorial design among n = 1602 German respondents, varying the decision-making agent (HDM vs. ADM) and the prioritized social group (teachers vs. prisoners) as design factors. Our findings show that general trust in ADM systems and preference for the vaccination of a specific social group influence the legitimacy of vaccine allocation. However, contrary to our expectations, trust in the agent making the decision did not moderate the link between social group preference and legitimacy, nor was this link moderated by the type of decision-maker (human vs. algorithm). We conclude that trustworthy ADM systems do not necessarily lend legitimacy to the decisions they make.
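The 2 × 2 factorial design and the moderation tests described here can be sketched with an ordinary least squares model including interaction terms. This is a minimal sketch on synthetic data; the variable names (`agent`, `group`, `trust_adm`, `group_pref`, `legitimacy`) and the simulated effect sizes are assumptions, not the study's actual measures or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1602

# Synthetic stand-in for the 2 x 2 factorial survey data.
df = pd.DataFrame({
    "agent": rng.choice(["human", "algorithm"], size=n),    # HDM vs. ADM
    "group": rng.choice(["teachers", "prisoners"], size=n),  # prioritized group
    "trust_adm": rng.normal(0, 1, size=n),                   # general trust in ADM
    "group_pref": rng.normal(0, 1, size=n),                  # preference for the group
})
# Toy outcome: legitimacy driven by trust and group preference, no moderation.
df["legitimacy"] = (0.3 * df["trust_adm"] + 0.4 * df["group_pref"]
                    + rng.normal(0, 1, size=n))

# Main effects plus the two moderation terms tested in the study: does trust,
# or the type of decision-maker, moderate the preference-legitimacy link?
model = smf.ols(
    "legitimacy ~ C(agent) * group_pref + trust_adm * group_pref + C(group)",
    data=df,
).fit()
print(model.summary())
```

Non-significant interaction coefficients in such a model would correspond to the absent moderation effects the abstract reports.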
Digital technologies offer new communicative affordances for fighting corruption. Bottom-up efforts increasingly use algorithmic tools, i.e., bots, to automate corruption reporting on social media platforms. This study investigates how to design a bot that effectively and responsibly mobilizes people for collective action against corruption. In a large (n = 1,331) pre-registered choice-based conjoint survey, we test six message design features: type of injustice, degree of injustice, anger, political partisanship, gender, and efficacy cues. Our results show that calling out cases of severe corruption effectively mobilized people against corruption. We find no empirical support for in-group favoritism based on political affiliation or gender. Yet, some commonly used design features can have contrasting effects on different audiences. We call for more social science research accompanying the technical development of algorithmic tools to fight corruption.
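Choice-based conjoint data of this kind are commonly analyzed by regressing the choice indicator on attribute dummies with standard errors clustered by respondent, yielding average marginal component effects (AMCEs). The sketch below shows that standard estimation skeleton on synthetic data; it is not necessarily the authors' exact estimator, only three of the six attributes are included for brevity, and the randomly generated choices will produce near-zero estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_resp, n_tasks = 1331, 6

# Synthetic stand-in: two message profiles per choice task, one row each.
rows = []
for r in range(n_resp):
    for t in range(n_tasks):
        for alt in range(2):
            rows.append({
                "respondent": r,
                "severity": rng.choice(["minor", "severe"]),
                "anger": rng.choice(["neutral", "angry"]),
                "efficacy_cue": rng.choice(["absent", "present"]),
            })
df = pd.DataFrame(rows)

# Mark one profile per pair as chosen (random here; real data would record
# which message the respondent actually picked).
picks = rng.integers(0, 2, size=n_resp * n_tasks)
df["chosen"] = 0
df.loc[np.arange(n_resp * n_tasks) * 2 + picks, "chosen"] = 1

# Linear probability model of choice on attribute dummies; clustering the
# standard errors by respondent gives AMCE-style estimates.
amce = smf.ols(
    "chosen ~ C(severity) + C(anger) + C(efficacy_cue)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["respondent"]})
print(amce.summary())
```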