“…For teamwork of this sort to succeed when complex tasks are at stake, humans and robots might sometimes need the capacity of theory of mind (or second-order "mental" models) to represent each other's epistemic states (knowledge, belief) and pro-attitudes (desires, goals). Theory of mind comes "live" in the human brain at age three to five (Southgate, 2013; Wellman, Cross, & Watson, 2001), and its role in cooperative human-robot interaction has received considerable attention recently (e.g., Brooks & Szafir, 2019; Devin & Alami, 2016; Görür, Rosman, Hoffman, & Albayrak, 2017; Leyzberg, Spaulding, & Scassellati, 2014; Scassellati, 2002; Zhao, Holtzen, Gao, & Zhu, 2015; for a review, see Tabrez, Luebbers, & Hayes, 2020; for implementations in "moral algorithms," see Tolmeijer, Kneer, Sarasua, Christen, & Bernstein, 2020).…”