“…The integration of auditory speech and a lipread or lexical context has been extensively studied with brain imaging and electrophysiological methods (e.g., Callan et al., 2003; Calvert et al., 1997; Calvert & Campbell, 2003; Campbell, 2008; Colin et al., 2002; Holcomb & Neville, 1990; Klucharev, Möttönen, & Sams, 2003; Sams et al., 1991; van Wassenhove, Grant, & Poeppel, 2005). For instance, lipread speech context modulates auditory speech processing as early as 100 msec after stimulus onset, as reflected by the attenuation and speeding-up of the N1 component in the ERPs (Besle, Fort, Delpuech, & Giard, 2004; Klucharev et al., 2003; van Wassenhove et al., 2005), whereas lexically induced modulation of auditory speech processing is often reported to occur at around 400 msec (e.g., Holcomb & Neville, 1990). However, there is accumulating evidence that the early effects of lipread speech reflect low-level visual prediction (i.e., the anticipatory visual motion warns the listener about when a sound is going to occur) rather than higher-level phonetic integration, which presumably occurs later in time (e.g., Vroomen & Stekelenburg, 2010).…”