In this paper, we present robust feature extractors that incorporate a regularized minimum variance distortionless response (RMVDR) spectrum estimator, instead of the discrete Fourier transform-based direct spectrum estimator used in many front-ends including the conventional MFCC, to estimate the speech power spectrum. Direct spectrum estimators, e.g., the single-tapered periodogram, have high variance and perform poorly under noisy and adverse conditions. To reduce this performance drop, we propose to increase the robustness of speech recognition systems by extracting more robust features based on the regularized MVDR technique. The RMVDR spectrum estimator has low spectral variance and is robust to mismatched conditions. Based on the RMVDR spectrum estimator, we propose robust acoustic front-ends, namely, regularized MVDR-based cepstral coefficients (RMCC), robust RMVDR cepstral coefficients (RRMCC), and normalized RMVDR cepstral coefficients (NRMCC). In addition to the RMVDR spectrum estimator, RRMCC and NRMCC also utilize auditory-domain spectrum enhancement methods, the auditory spectrum enhancement (ASE) and medium-duration power bias subtraction (MDPBS) techniques, respectively, to improve the robustness of the feature extraction. Speech recognition experiments are conducted on the AURORA-4 large vocabulary continuous speech recognition corpus, and performance is compared with Mel frequency cepstral coefficients (MFCC), perceptual linear prediction (PLP), MVDR spectrum estimator-based MFCC, perceptual MVDR (PMVDR), cochlear filterbank cepstral coefficients (CFCC), power normalized cepstral coefficients (PNCC), the ETSI advanced front-end (ETSI-AFE), and the robust feature extractor (RFE) of Alam et al. (2012).
Experimental results demonstrate that the proposed robust feature extractors outperform the other robust front-ends in terms of percentage word error rate on the AURORA-4 large vocabulary continuous speech recognition (LVCSR) task under both clean and multi-condition training. With clean training, on average, RRMCC and NRMCC provide significant reductions in word error rate over the rest of the front-ends. With multi-condition training, RMCC, RRMCC, and NRMCC perform slightly better in terms of average word error rate than the rest of the front-ends used in this work.
© 2015 Published by Elsevier B.V.
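As a minimal illustration of the kind of estimator the abstract describes, the sketch below computes an MVDR power spectrum, S(ω) = 1 / (v(ω)ᴴ R⁻¹ v(ω)), from the Toeplitz autocorrelation matrix R of a frame, with simple diagonal loading standing in for the regularization step. This is an assumption-laden sketch: the model order, loading factor, and the specific regularization used in the paper's RMVDR front-end are not given in the abstract, so the values here are illustrative only.

```python
import numpy as np

def mvdr_spectrum(x, order=16, n_freq=256, loading=1e-6):
    """Illustrative MVDR power spectrum estimate of a 1-D signal frame.

    S(w) = 1 / (v(w)^H R^{-1} v(w)), where R is the (order+1)x(order+1)
    Toeplitz autocorrelation matrix and v(w) = [1, e^{-jw}, ..., e^{-jw*order}].
    """
    n = len(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Toeplitz autocorrelation matrix
    R = np.array([[r[abs(i - j)] for j in range(order + 1)]
                  for i in range(order + 1)])
    # Diagonal loading: a simple stand-in for the regularization in RMVDR,
    # keeping the matrix inverse well conditioned.
    R += loading * r[0] * np.eye(order + 1)
    R_inv = np.linalg.inv(R)
    w = np.linspace(0.0, np.pi, n_freq)
    k = np.arange(order + 1)
    V = np.exp(-1j * np.outer(w, k))  # steering vectors, shape (n_freq, order+1)
    # Quadratic form v^H R^{-1} v for every frequency at once
    denom = np.einsum('fi,ij,fj->f', V.conj(), R_inv, V).real
    return 1.0 / denom
```

In a cepstral front-end of this kind, such a low-variance spectrum estimate would replace the periodogram before the filterbank and cepstral stages; a higher model order sharpens spectral peaks at the cost of conditioning, which is where the regularization matters.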