“…Thus, measures of task performance under multisensory conditions show that multiple species can take advantage of the often complementary or redundant sensory information available to them in their environment (Bahrick & Lickliter, 2000; Foxe & Simpson, 2002; Gibson, 1969; Hammond-Kenny, Bajo, King, & Nodal, 2016; Stein, London, Wilkinson, & Price, 1996), allowing them to evolve and adapt to novel ecological niches (Karageorgi et al., 2017). In the case of humans, watching lip and facial movements, hand gestures, head nods, and facial configuration (Jaekl, Pesquita, Alsius, Munhall, & Soto-Faraco, 2015), and even feeling the breath of a speaker on one's skin (Gick & Derrick, 2009), can all provide additional information to an observer trying to understand what a speaker is saying (Ma, Zhou, Ross, Foxe, & Parra, 2009; Ross et al., 2011; Ross, Saint-Amour, Leavitt, Javitt, & Foxe, 2007; Sumby & Pollack, 1954). Even for more basic non-speech stimulus configurations, hearing a sound produced by a visual object is likely to enhance its detectability (Fiebelkorn et al., 2011; Molholm et al., 2002; Van der Burg, Olivers, Bronkhorst, & Theeuwes, 2008).…”