Most zero-shot multi-speaker TTS (ZS-TTS) systems support only a single language. Although models such as YourTTS, VALL-E X, Mega-TTS 2, and Voicebox have explored multilingual ZS-TTS, they cover only a few high/medium-resource languages, which limits their applicability in most low/medium-resource languages. In this paper, we aim to alleviate this issue by proposing and making publicly available the XTTS system. Our method builds upon the Tortoise model and adds several novel modifications to enable multilingual training, improve voice cloning, and speed up both training and inference. XTTS was trained in 16 languages and achieved state-of-the-art (SOTA) results in most of them.
The ASVspoof dataset is one of the most established datasets for training and benchmarking systems that detect spoofed audio and audio deepfakes. However, we observe an uneven distribution of silence length in the dataset's training and test data that hints at the target label: bona-fide instances tend to have significantly longer leading and trailing silences than spoofed instances. This is problematic, since a model may learn to base its decision wholly, or at least partially, on the length of the silence (similar to the issue with the Pascal VOC 2007 dataset, where all images of horses also contained a specific watermark [1]). In this paper, we explore this phenomenon in depth. We train a number of networks a) on only the length of the leading silence and b) with and without leading and trailing silence. Results show that models trained on only the length of the leading silence perform suspiciously well: they achieve up to 85% accuracy and an equal error rate (EER) of 0.15 on the 'eval' split of the data. Conversely, when training strong models on the full audio files, we observe that trimming silence during preprocessing dramatically worsens performance (the EER increases from 0.03 to 0.15). This could indicate that previous work may, in part, have learned only to classify targets based on the length of silence. Consequently, it could mean that spoofing detection is not as advanced as previous high scores have led us to believe. We hope that by sharing these results, the ASV community can further evaluate this phenomenon.
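The silence-length shortcut described above can be illustrated with a minimal sketch. The function names, threshold values, and synthetic signals below are hypothetical illustrations, not the paper's actual networks or preprocessing; the sketch only shows how a trivial rule based on leading-silence length could separate classes when silence length correlates with the label.

```python
# Hedged sketch (not the paper's models): classifying audio as bona-fide vs.
# spoofed using ONLY the leading-silence length. All thresholds are made up.
import numpy as np

def leading_silence_len(audio: np.ndarray, threshold: float = 0.01) -> int:
    """Number of samples before the signal first exceeds `threshold` in magnitude."""
    above = np.abs(audio) > threshold
    return int(np.argmax(above)) if above.any() else len(audio)

def classify_by_silence(audio: np.ndarray, sample_rate: int = 16000,
                        cutoff_sec: float = 0.15) -> str:
    """Toy decision rule: long leading silence -> bona-fide, short -> spoofed."""
    silence_sec = leading_silence_len(audio) / sample_rate
    return "bona-fide" if silence_sec > cutoff_sec else "spoofed"

# Synthetic example: 0.3 s of silence followed by a 440 Hz tone.
sr = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
audio = np.concatenate([np.zeros(int(0.3 * sr)), tone])
print(classify_by_silence(audio, sr))  # -> bona-fide
print(classify_by_silence(tone, sr))   # -> spoofed
```

A rule this simple having any predictive power is exactly the dataset artifact the paper warns about: the feature carries no information about spoofing itself.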
In this paper, we propose SC-GlowTTS, an efficient zero-shot multi-speaker text-to-speech model that improves similarity for speakers unseen during training. We propose a speaker-conditional architecture that explores a flow-based decoder operating in a zero-shot scenario. As text encoders, we explore a dilated residual convolutional encoder, a gated convolutional encoder, and a transformer-based encoder. Additionally, we show that adjusting a GAN-based vocoder to the spectrograms predicted by the TTS model on the training dataset can significantly improve the similarity and speech quality for new speakers. Our model converges in training with only 11 speakers, reaching state-of-the-art results for similarity to new speakers, as well as high speech quality.
A key requirement for supervised machine learning is labeled training data, which is created by annotating unlabeled data with the appropriate class. Because in many cases this process cannot be done by machines, labeling must be performed by human domain experts. This process tends to be expensive in both time and money, and is prone to errors. Additionally, reviewing an entire labeled dataset manually is often prohibitively costly, so many real-world datasets contain mislabeled instances. To address this issue, we present in this paper a nonparametric end-to-end pipeline for finding mislabeled instances in numerical, image, and natural-language datasets. We evaluate our system quantitatively by adding a small amount of label noise to 29 datasets, and show that we find mislabeled instances with an average precision of more than 0.84 when reviewing our system's top 1% of recommendations. We then apply our system to publicly available datasets and find mislabeled instances in CIFAR-100, Fashion-MNIST, and others. Finally, we publish the code and an applicable implementation of our approach.
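The idea of ranking instances by how suspicious their labels are can be sketched with a simple stand-in scorer. This is not the paper's actual nonparametric pipeline; the k-nearest-neighbor disagreement score below is a common, illustrative substitute, and all names and parameters here are assumptions.

```python
# Hedged sketch (not the paper's pipeline): flag likely-mislabeled instances
# by the fraction of each point's nearest neighbors that disagree with its label.
import numpy as np

def mislabel_scores(X: np.ndarray, y: np.ndarray, k: int = 5) -> np.ndarray:
    """Score in [0, 1] per instance: fraction of the k nearest neighbors
    whose label differs from the instance's own label."""
    # Pairwise squared Euclidean distances between all rows of X.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude each point from its own neighbors
    nn = np.argsort(d2, axis=1)[:, :k]    # indices of the k nearest neighbors
    return (y[nn] != y[:, None]).mean(axis=1)

# Two well-separated clusters; one point in cluster 0 is deliberately mislabeled.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
y[3] = 1                                  # inject label noise
scores = mislabel_scores(X, y)
print(int(np.argmax(scores)))             # flagged instance -> 3
```

Reviewing only the highest-scoring instances mirrors the paper's evaluation setup, where a reviewer inspects the system's top 1% of recommendations.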