“…It has been well established that in challenging listening conditions, speech can often be understood if there is sufficient semantic or linguistic contextual support available to provide information about the degraded signal (e.g., Kalikow et al., 1977; Bilger et al., 1984). Further, older adults are particularly adept at using context to compensate for difficulties hearing a degraded acoustic signal, presumably having developed expertise because typical everyday listening conditions are often more perceptually challenging for them than they are for younger adults (Perry and Wingfield, 1994; Pichora-Fuller et al., 1995; Gordon-Salant and Fitzgibbons, 1997; Sommers and Danielson, 1999; Wingfield et al., 2005). The acoustic signal itself may also support spoken language comprehension by supplementing or augmenting the use of context based on semantic knowledge. Various studies have demonstrated that listeners can use phonological or prosodic information to direct attentional or top-down resources during spoken word recognition (Gow and Gordon, 1995; Marslen-Wilson and Tyler, 1980; Pitt and Samuel, 1990). Moreover, other situational cues, such as priming with a semantically related sentence (e.g., Gagné et al., 2002), presenting visual speech for speech reading (e.g., Sumby and Pollack, 1954), presenting written text or clear speech as feedback (e.g., Davis et al., 2005), spatially separating concurrent sounds (e.g., Freyman et al., 1999, 2001; Li et al., 2004), and increasing the pitch differences among simultaneous talkers (e.g., Mackersie and Prida, 2001) can all enhance speech intelligibility.…”