“…When trained to adapt to degraded speech signals, typical listeners learn to rely on higher-level top-down information (semantic and lexical knowledge) as well as low-level information (acoustic cues) to better adapt to distorted input (Banai & Lavner, 2012; Guediche et al., 2016). In typical listeners, learning of distorted speech generalizes across stimuli that share high-level representations (new talker, same tokens) but also to new items that do not share high-level representations with the trained ones (same talker, new tokens) (Banai & Lavner, 2012, 2014; Gabay et al., 2017). By contrast, for individuals with DD, such generalization is confined to situations in which trained and untrained information shares the same high-level top-down information (new talker, same tokens) (Gabay et al., 2017) and is not observed in situations in which only low-level sub-lexical cues are shared between the trained and untrained information (same talker, new tokens) (Gabay et al., 2017; Gabay & Holt, 2021).…”