“…Yet one might correctly point out that many more objections to algorithms have recently appeared in the algorithmic ethics literature (Birhane, 2021; Hunkenschroer & Luetge, 2022; Martin, 2019; Müller, 2021; Tasioulas, 2019; Tsamados et al., 2022). For example, there are concerns related to algorithms systemically excluding certain individuals (Creel & Hellman, 2022), eliciting organizational monocultures (Kleinberg & Raghavan, 2021), or disproportionately harming marginalized groups (Birhane, 2021); worries related to the legitimacy and trustworthiness of algorithms (Benn & Lazar, 2022; Martin & Waldman, 2022; Tong, Jia, Luo, & Fang, 2021) and the lack of explainability in the case of opaque algorithms (Anthony, 2021; Kim & Routledge, 2022; Lu, Lee, Kim, & Danks, 2020; Rahman, 2021; Rudin, 2019; Selbst & Powles, 2017; Véliz, Prunkl, Phillips-Brown, & Lechterman, 2021; Wachter, Mittelstadt, & Floridi, 2017); issues related to whether algorithms preclude us from taking people seriously as individuals (Lippert-Rasmussen, 2011; Susser, 2021); and concerns related to whether automated systems create responsibility or accountability gaps (Bhargava & Velasquez, 2019; Danaher, 2016; Himmelreich, 2019; Nyholm, 2018; Roff, 2013; Simpson & Müller, 2016; Sparrow, 2007; Tigard, 2021), among other concerns (Bedi, 2021; Tasioulas, 2019; Tsamados et al., 2022; Yam & Skorburg, 2021). In short, there is now a rich literature involving a wide range of concerns related to adopting algorithms in lieu of human decision makers (Hunkenschroer & Luetge, 2022; Martin, 2022; Müller, 2021; Tsamados et al., 2022).…”