Speaker embedding is an important front-end module for extracting discriminative speaker features (e.g., x-vectors) in many speech applications where speaker information is needed. Current state-of-the-art backbone networks for speaker embedding aggregate multi-scale features from an utterance with multi-branch network architectures (e.g., ECAPA-TDNN). However, naively adding many branches of multi-scale features with simple fully convolutional operations does not efficiently improve performance, because the model parameters and computational complexity increase rapidly. Consequently, most current state-of-the-art architectures can afford only a few branches, covering a limited number of temporal scales. To address this problem, we propose an effective temporal multi-scale (TMS) model in which multi-scale branches can be added to a speaker embedding network with almost no increase in computational cost. The model is based on the conventional time-delay neural network (TDNN), whose architecture is separated into two modeling operators: a channel-modeling operator and a temporal multi-branch modeling operator. Adding temporal scales to the temporal multi-branch operator requires only a marginal increase in the number of parameters, leaving more of the computational budget for branches with large temporal scales. Moreover, for the inference stage, we developed a systematic reparameterization method that converts the trained multi-branch topology into a single-path topology to increase inference speed. We evaluated the TMS method for automatic speaker verification (ASV) under in-domain (VoxCeleb) and out-of-domain (CNCeleb) conditions.
Results show that the TMS-based model significantly outperforms state-of-the-art ASV models such as ECAPA-TDNN while generalizing better. Moreover, the proposed model achieved a 29%-46% speed-up in inference compared to the state-of-the-art ECAPA-TDNN.
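The reparameterization idea mentioned above relies on the linearity of convolution: parallel convolution branches with different kernel sizes can be collapsed into a single convolution by zero-padding the smaller kernels to a common size and summing them. The following is a minimal NumPy sketch of that principle for two 1-D branches; it is an illustration of the general technique, not the paper's actual implementation, and the function name `merge_branches` is hypothetical.

```python
import numpy as np

def merge_branches(k_small, k_large):
    # Zero-pad the smaller odd-length kernel so both kernels are
    # center-aligned, then sum them into one equivalent kernel.
    pad = (len(k_large) - len(k_small)) // 2
    return np.pad(k_small, pad) + k_large

rng = np.random.default_rng(0)
x = rng.standard_normal(100)   # a 1-D feature sequence
k3 = rng.standard_normal(3)    # small-scale branch kernel
k5 = rng.standard_normal(5)    # large-scale branch kernel

# Multi-branch (training-time) topology: two parallel convolutions, summed.
y_multi = np.convolve(x, k3, mode='same') + np.convolve(x, k5, mode='same')

# Single-path (inference-time) topology: one convolution with the merged kernel.
y_single = np.convolve(x, merge_branches(k3, k5), mode='same')

# The two topologies produce identical outputs (up to floating-point error).
assert np.allclose(y_multi, y_single)
```

Because the merged single-path model runs one convolution instead of several, this kind of structural conversion is what enables the inference speed-up without changing the model's outputs.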