Abstract: Speaker recognition has developed over the past few decades into a supposedly mature technique. Existing methods typically utilize robust features extracted from clean speech. In real-world applications, especially security- and forensics-related ones, reliability of recognition becomes crucial, while limited speech samples and adverse acoustic conditions, most notably noise and reverberation, impose further complications. This paper presents a study of the behavior of typical speaker recognition systems in adverse retrieval phases. Following a brief review, a speaker recognition system was implemented using Microsoft's MSR Identity Toolbox. Validation tests were carried out with clean speech and with speech contaminated by noise and/or reverberation of varying degrees. The image source method was adopted to account for realistic acoustic conditions in enclosed spaces. Statistical relationships between recognition accuracy and signal-to-noise ratios or reverberation times were thereby established. Results show that noise and reverberation can, to different extents, degrade recognition performance. Both reverberation time and direct-to-reverberant ratio affect recognition accuracy. The findings may be used to estimate the accuracy of speaker recognition and further determine the likelihood of a particular speaker.
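The contaminated test conditions described above can be reproduced by scaling a noise recording so that the mixture reaches a prescribed signal-to-noise ratio before mixing. A minimal sketch follows; the helper name `mix_at_snr` and its interface are illustrative, not taken from the paper (the reverberant part of the setup would additionally convolve the speech with an image-source room impulse response).

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
    then return the noisy mixture. Hypothetical helper for illustration."""
    noise = noise[:len(speech)]          # trim noise to the speech length
    p_s = np.mean(speech ** 2)           # average speech power
    p_n = np.mean(noise ** 2)            # average noise power
    # gain chosen so that p_s / (gain**2 * p_n) = 10**(snr_db / 10)
    gain = np.sqrt(p_s / (p_n * 10.0 ** (snr_db / 10.0)))
    return speech + gain * noise
```

Sweeping `snr_db` over a range of values and scoring the recognizer on each mixture yields the accuracy-versus-SNR relationships the abstract refers to.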
<div><p>In the field of audio classification, audio signals may be broadly divided into three classes: speech, music and events. Most studies, however, neglect that real audio soundtracks can contain any combination of these classes simultaneously. In this study, a novel feature, “Entrocy”, is proposed for the detection of music both in pure form and overlapping with the other audio classes. Entrocy is defined as the variation of the information (or entropy) in an audio segment over time. Segments which contain music were found to have lower Entrocy, since they exhibit fewer abrupt changes over time.</p></div><p class="Abstract">We have also compared Entrocy with existing music detection features, and Entrocy shows promising performance.</p><p class="IndexTerms"><a name="PointTmp"></a><em>Keywords</em>—Music detection, audio content analysis, audio indexing, Entropy, real world audio classification.</p>
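The definition above, variation of entropy over time, can be sketched directly: estimate an entropy value per frame, then measure how much that sequence fluctuates. The histogram-based entropy estimator and the use of variance as the "variation" statistic are assumptions for illustration; the paper's exact estimator may differ.

```python
import numpy as np

def frame_entropy(frame, n_bins=32):
    # Entropy of the amplitude histogram of one frame (assumed estimator).
    hist, _ = np.histogram(frame, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]                          # ignore empty bins (0*log 0 = 0)
    return -np.sum(p * np.log2(p))

def entrocy(signal, frame_len=1024):
    # Variation of frame entropy over time; low values suggest music,
    # per the abstract's observation about fewer abrupt changes.
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    ent = np.array([frame_entropy(f) for f in frames])
    return np.var(ent)
```

A steady tone, whose per-frame entropy barely changes, should score lower than a signal that jumps between silence and noise.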
Automatic speaker recognition can achieve remarkable performance when training and test conditions are matched. Conversely, results drop significantly under mismatched noisy conditions. Furthermore, feature extraction significantly affects performance. Mel-frequency cepstral coefficients (MFCCs) are the most commonly used features in this field. The literature reports that recognition accuracy is highly dependent on the match between training and testing conditions. Taken together, these facts support strong recommendations for using MFCC features under similar (train/test) environmental conditions for speaker recognition. However, when noise and reverberation are present, MFCC performance is not reliable. To address this, we propose a new feature, 'entrocy', for accurate and robust speaker recognition, which we mainly employ to support MFCC coefficients in noisy environments. Entrocy is the Fourier transform of the entropy, a measure of the fluctuation of the information in sound segments over time. Entrocy features are combined with MFCCs to form a composite feature set, which is tested with the Gaussian mixture model (GMM) speaker recognition method. The proposed method shows improved recognition accuracy over a range of signal-to-noise ratios.
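Per this abstract, entrocy is obtained by taking the Fourier transform of a frame-entropy sequence and fusing the result with MFCCs. A minimal sketch of that fusion step follows; the number of retained coefficients and simple concatenation as the fusion scheme are assumptions, and the helper names are illustrative.

```python
import numpy as np

def entrocy_features(entropy_seq, n_coeffs=8):
    # Fourier transform of the frame-entropy sequence (per the abstract);
    # keep the first few magnitude coefficients as a fixed-length feature.
    spec = np.abs(np.fft.rfft(entropy_seq))
    return spec[:n_coeffs]

def composite_features(mfcc_vec, entropy_seq):
    # Concatenate MFCCs with entrocy coefficients (fusion scheme assumed);
    # the result would be modeled with a GMM per speaker.
    return np.concatenate([mfcc_vec, entrocy_features(entropy_seq)])
```

The composite vector has the MFCC dimensionality plus `n_coeffs` extra entrocy dimensions, so existing GMM training code needs no structural change.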
Background & Objective:
Speaker Recognition (SR) techniques have developed into
a relatively mature state over the past few decades. Existing methods
typically use robust features extracted from clean speech signals, and can therefore achieve
very high recognition accuracy under idealized conditions. For critical applications, such as security and forensics,
robustness and reliability of the system are crucial.
Methods:
Background noise and reverberation, which often occur in real-world applications, are
known to compromise recognition performance. To improve the performance of speaker verification
systems, an effective and robust feature extraction technique for speech processing is proposed, capable
of operating in both clean and noisy conditions. Mel Frequency Cepstrum Coefficients (MFCCs)
and Gammatone Frequency Cepstral Coefficients (GFCCs) are mature techniques and the most
common features used for speaker recognition. MFCCs are calculated from the log energies
in frequency bands distributed over a mel scale, whereas GFCCs are obtained from a bank of
Gammatone filters, originally proposed to model human cochlear filtering. This paper
investigates the performance of GFCC and conventional MFCC features in clean and noisy conditions.
The effects of Signal-to-Noise Ratio (SNR) and language mismatch on system performance
are also taken into account in this work.
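The MFCC computation described in the Methods section, log energies in mel-scaled frequency bands followed by a decorrelating transform, can be sketched as below. This is a generic textbook MFCC pipeline for a single frame, not the paper's specific implementation; filter counts and the small flooring constant are illustrative choices.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters with centres evenly spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, ce, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, ce):
            fb[i - 1, k] = (k - lo) / max(ce - lo, 1)   # rising slope
        for k in range(ce, hi):
            fb[i - 1, k] = (hi - k) / max(hi - ce, 1)   # falling slope
    return fb

def mfcc(frame, sr, n_filters=26, n_ceps=13):
    spec = np.abs(np.fft.rfft(frame)) ** 2              # power spectrum
    fb = mel_filterbank(n_filters, len(frame), sr)
    log_e = np.log(fb @ spec + 1e-10)                   # log mel-band energies
    # DCT-II of the log energies yields the cepstral coefficients.
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_filters))
    return dct @ log_e
```

A GFCC pipeline differs mainly in the analysis stage: the mel filterbank is replaced by a Gammatone filterbank modeling cochlear frequency selectivity.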
Conclusion:
Experimental results show significant improvement in system performance in
terms of reduced equal error rate and detection error trade-off. Performance in terms of recognition
rates under various noise types and Signal-to-Noise Ratios (SNRs) was quantified via simulation.
Results of the study are presented and discussed.