“…If the model 'decides' to open with the Queen's Gambit, this is not a moral decision under any definition of 'morality'. In contrast, the decisions made by an autonomous weapon system (Arkin, 2008a,b; Krishnan, 2009; Tonkens, 2012; Hellström, 2013; Asaro, 2020), a healthcare robot (Anderson et al., 2006; Anderson and Anderson, 2008; Sharkey and Sharkey, 2012; Conti et al., 2017), or an autonomous vehicle (Bhargava and Kim, 2017; Sommaggio and Marchiori, 2018; Evans et al., 2020) may have moral weight. In these cases, the action space may include decision points that we might call 'moral' or 'immoral': for example, choosing to prioritise one patient over another.…”