Humans often categorize others as belonging to distinct social groups, distinguishing "us" from "them," and this categorization influences cooperation: decisions tend to favor in-group members and, at times, discriminate against out-group members [1][2][3][4]. As autonomous machines, such as self-driving cars, drones, and robots, become pervasive in society [5][6][7], it is important to understand whether humans also apply social categories when engaging with these machines, whether decision making is shaped by these categories, and, if so, how to overcome unfavorable biases to promote cooperation between humans and machines. Here we show that, when deciding whether to cooperate with a machine, people engage by default in social categorization that is unfavorable to machines, but this bias can be overridden by having machines communicate cues of affiliative intent.