“…Classical modelling approaches, such as the AI and the SII, have been adapted to account for hearing impairment, but they rely solely on the information provided by the audiogram and thus have only limited applicability (Pavlovic et al., 1986; Payton and Uchanski, 1994; Rhebergen et al., 2010; Meyer and Brand, 2013). On the other hand, sophisticated automatic speech recognition (ASR) based approaches, such as the Framework for Auditory Discrimination Experiments (FADE; Schädler et al., 2015; Schädler et al., 2016), while powerful predictors of NH speech intelligibility, offer only limited insight into the underlying auditory processes: the cue extraction from the internal representations of the signals is delegated to a highly trained ASR system, whose performance depends on the amount and type of (over-)training rather than on the actual importance of the selected features for human listeners. Furthermore, such models require explicit individualized fitting of parameters in order to account for HI data (Kollmeier et al., 2016).…”