This work processes the linear prediction (LP) residual in the time domain at three different levels, extracts speaker information at each, and demonstrates both its significance and its distinct nature for text-independent speaker recognition. The subsegmental analysis extracts speaker information from the LP residual in blocks of 5 ms with a shift of 2.5 ms. The segmental analysis processes blocks of 20 ms with a shift of 2.5 ms. The suprasegmental speaker information is extracted from blocks of 250 ms with a shift of 6.25 ms. Speaker identification and verification studies performed on the NIST-99 and NIST-03 databases demonstrate that the segmental analysis provides the best performance, followed by the subsegmental analysis; the suprasegmental analysis performs worst. However, the evidence from the three levels of processing appears to be different and combines well to improve performance, demonstrating that different speaker information is captured at each level. Finally, combining the evidence from all three levels with vocal tract information further improves speaker recognition performance.
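The three-level processing above can be sketched in a few lines of Python: an LP residual obtained by inverse filtering (autocorrelation method), then split into overlapping blocks at the three block/shift settings. The LP order of 10 and the white-noise stand-in for speech are illustrative assumptions, not choices from the paper.

```python
import numpy as np

def lp_residual(signal, order=10):
    """Inverse-filter the signal with LP coefficients (autocorrelation method)."""
    n = len(signal)
    # Autocorrelation up to 'order' lags
    r = np.array([np.dot(signal[:n - k], signal[k:]) for k in range(order + 1)])
    # Solve the Yule-Walker equations R a = r for the predictor coefficients
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    # Residual e[n] = s[n] - sum_k a_k * s[n-k]
    pred = np.convolve(signal, np.concatenate(([0.0], a)))[:n]
    return signal - pred

def frame(residual, fs, block_ms, shift_ms):
    """Split the residual into overlapping blocks of block_ms with shift_ms hop."""
    blk = int(fs * block_ms / 1000)
    hop = int(fs * shift_ms / 1000)
    count = 1 + max(0, (len(residual) - blk) // hop)
    return np.stack([residual[i * hop: i * hop + blk] for i in range(count)])

fs = 8000
x = np.random.randn(fs)            # 1 s of stand-in "speech"
e = lp_residual(x)
sub = frame(e, fs, 5, 2.5)         # subsegmental: 5 ms blocks, 2.5 ms shift
seg = frame(e, fs, 20, 2.5)        # segmental: 20 ms blocks, 2.5 ms shift
sup = frame(e, fs, 250, 6.25)      # suprasegmental: 250 ms blocks, 6.25 ms shift
```

At 8 kHz this yields 40-, 160-, and 2000-sample blocks respectively; each set of blocks then feeds the corresponding level of analysis.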
Speaker verification with limited data is significant for practical, system-oriented applications. This paper shows the importance of different aspects of the voice-source feature in the limited-test-data scenario. A baseline speaker verification system using conventional mel-frequency cepstral coefficient (MFCC) features is developed, and its performance under the limited-test-data condition (≤10 s) is evaluated. A parallel system based on the source feature mel power difference of spectrum in subband (M-PDSS) is developed in the i-vector speaker verification framework. The two systems are fused at the score level for short segments of test speech, which demonstrates the growing importance of the source feature as the test-data duration is reduced. The M-PDSS feature is then compared with our earlier work using the discrete cosine transform of the integrated linear prediction residual (DCTILPR) feature, and the two source features, M-PDSS and DCTILPR, are fused along with the MFCC features. An absolute improvement of 5.19% is obtained for 2 s of test data, conveying the significance of multiple source features for limited-data speaker verification, since each carries a different aspect of source information.
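Score-level fusion of the kind described above can be sketched as a z-normalise-then-weighted-sum combiner. The weight `alpha` and the function name are hypothetical; in practice the weight is tuned on development data.

```python
import numpy as np

def fuse_scores(mfcc_scores, source_scores, alpha=0.7):
    """Score-level fusion: z-normalise each system's trial scores, then
    take a weighted sum. alpha (weight on the MFCC system) is a
    hypothetical value that would normally be tuned on development data."""
    def znorm(scores):
        s = np.asarray(scores, dtype=float)
        return (s - s.mean()) / s.std()
    return alpha * znorm(mfcc_scores) + (1.0 - alpha) * znorm(source_scores)

fused = fuse_scores([1.2, -0.4, 0.9], [0.3, -1.1, 0.5])
```

Z-normalisation before fusion puts the two systems' scores on a comparable scale, so the weight reflects relative reliability rather than raw score ranges.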
The objective of this work is to demonstrate the significant speaker information present in the subband energies of the linear prediction (LP) residual. The LP residual mostly contains the excitation source information. The subband energies extracted using a mel filterbank, followed by cepstral analysis, provide a compact representation. The resulting cepstral values are termed residual mel-frequency cepstral coefficients (R-MFCC). Speaker identification studies conducted using R-MFCC features and Gaussian mixture models (GMM) on a subset of 30 speakers from NIST-1999 provide 87% accuracy. MFCC features extracted directly from speech also provide 87% accuracy. Further, the combination of the two provides 90% accuracy, indicating that R-MFCC captures a different aspect of speaker information.
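The R-MFCC pipeline (mel-filterbank energies of the LP residual, followed by cepstral analysis) can be sketched as below. The filterbank size, FFT length, and number of cepstral coefficients are illustrative choices, and frames of the LP residual are assumed to be precomputed.

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filt, nfft, fs):
    """Triangular filters equally spaced on the mel scale."""
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filt + 2)
    bins = np.floor((nfft + 1) * mel_to_hz(mels) / fs).astype(int)
    fb = np.zeros((n_filt, nfft // 2 + 1))
    for i in range(1, n_filt + 1):
        left, centre, right = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, left:centre] = (np.arange(left, centre) - left) / max(centre - left, 1)
        fb[i - 1, centre:right] = (right - np.arange(centre, right)) / max(right - centre, 1)
    return fb

def r_mfcc(residual_frames, fs, n_filt=24, n_ceps=13, nfft=512):
    """Cepstral coefficients of the residual's mel subband energies (R-MFCC sketch)."""
    power = np.abs(np.fft.rfft(residual_frames, nfft)) ** 2      # power spectrum per frame
    energies = power @ mel_filterbank(n_filt, nfft, fs).T        # mel subband energies
    return dct(np.log(energies + 1e-10), norm='ortho')[:, :n_ceps]

frames = np.random.randn(10, 160)   # stand-in: 10 frames of 20 ms residual at 8 kHz
feats = r_mfcc(frames, 8000)
```

The only change from standard MFCC extraction is the input: the LP residual replaces the speech signal, so the subband energies characterise the excitation source rather than the vocal tract envelope.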
In this paper, explicit and implicit modelling of the subsegmental excitation information are compared experimentally. For explicit modelling, the static and dynamic values of the standard Liljencrants-Fant (LF) parameters that model the glottal flow derivative (GFD) are used. A simplified approximation method is proposed to compute these LF parameters by locating the glottal closing and opening instants; the proposed approach significantly reduces the computation needed to implement the LF model. For implicit modelling, LP residual samples taken in blocks of 5 ms with a shift of 2.5 ms are used. Speaker recognition studies are performed on the NIST-99 and NIST-03 databases. For speaker identification, implicit modelling provides significantly better performance than explicit modelling; conversely, explicit modelling performs better for speaker verification. This indicates that explicit modelling has relatively less intra- and inter-speaker variability, while implicit modelling has more of both; what is desirable is less intra-speaker and more inter-speaker variability. Therefore, explicit modelling may be used for the speaker verification task and implicit modelling for the speaker identification task. Further, for both tasks the explicit modelling provides relatively more complementary information to state-of-the-art vocal tract features, and its contribution is more robust against noise. We suggest that the explicit approach be used to model the subsegmental excitation information for speaker recognition.
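As a minimal sketch of the LF model underlying the explicit features: one glottal-flow-derivative pulse consists of an exponentially growing sinusoid (open phase) followed by an exponential return phase. The parameter values below and the fixed growth rate `alpha` are illustrative assumptions; the full LF model solves `alpha` from the constraint that the pulse integrates to zero over one period.

```python
import numpy as np

def lf_pulse(fs=8000, T0=0.008, Te=0.006, Tp=0.0045, Ta=0.0003, Ee=1.0):
    """One glottal-flow-derivative pulse from the LF model (illustrative).
    T0: pitch period, Te: glottal closure instant, Tp: instant of peak flow,
    Ta: effective return-phase duration, Ee: excitation strength.
    All default values here are hypothetical."""
    n = round(T0 * fs)
    t = np.arange(n) / fs
    wg = np.pi / Tp                       # angular frequency of the open-phase sinusoid
    # Fixed-point solve for eps in: eps * Ta = 1 - exp(-eps * (T0 - Te))
    eps = 1.0 / Ta
    for _ in range(100):
        eps = (1.0 - np.exp(-eps * (T0 - Te))) / Ta
    # alpha fixed for illustration; the full model derives it from the
    # zero-net-flow (area balance) constraint over the period.
    alpha = 500.0
    # Scale the open phase so it reaches -Ee exactly at t = Te (continuity).
    E0 = -Ee / (np.exp(alpha * Te) * np.sin(wg * Te))
    e = np.empty(n)
    open_ph = t <= Te
    e[open_ph] = E0 * np.exp(alpha * t[open_ph]) * np.sin(wg * t[open_ph])
    ret = ~open_ph
    e[ret] = -(Ee / (eps * Ta)) * (np.exp(-eps * (t[ret] - Te))
                                   - np.exp(-eps * (T0 - Te)))
    return t, e

t, e = lf_pulse()
```

Explicit modelling amounts to fitting parameters such as Te, Tp, Ta, and Ee to each detected glottal cycle, whereas implicit modelling simply takes raw residual blocks and lets the classifier absorb the same information.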