2021
DOI: 10.48550/arxiv.2111.05095
Preprint

Speaker Generation

Abstract: This work explores the task of synthesizing speech in nonexistent human-sounding voices. We call this task "speaker generation", and present TacoSpawn, a system that performs competitively at this task. TacoSpawn is a recurrent attention-based text-to-speech model that learns a distribution over a speaker embedding space, which enables sampling of novel and diverse speakers. Our method is easy to implement, and does not require transfer learning from speaker ID systems. We present objective and subjective metri…
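The abstract's central idea is that the TTS model learns a distribution over its speaker-embedding space, so novel voices can be obtained by sampling an embedding from that prior and conditioning synthesis on it. The sketch below is illustrative only and not the authors' code: the mixture-of-Gaussians parameterization, component count K, and embedding dimensionality d are assumptions chosen for the example.

```python
# Minimal sketch (assumed parameterization, not TacoSpawn's actual code):
# sample a novel speaker embedding from a learned prior over embedding space.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned prior: K diagonal-Gaussian components over d-dim embeddings.
K, d = 10, 128
weights = rng.dirichlet(np.ones(K))            # mixture weights (sum to 1)
means = rng.normal(size=(K, d))                # component means
log_stds = rng.normal(scale=0.1, size=(K, d))  # per-dimension log std devs

def sample_speaker_embedding():
    """Draw one novel speaker embedding from the mixture prior."""
    k = rng.choice(K, p=weights)               # pick a mixture component
    return means[k] + np.exp(log_stds[k]) * rng.normal(size=d)

# A sampled embedding would replace a training-speaker embedding as the
# conditioning input to the synthesizer, producing speech in a nonexistent voice.
new_speaker = sample_speaker_embedding()
print(new_speaker.shape)  # (128,)
```

In practice the prior's parameters would be fit jointly with (or after) TTS training on the table of learned training-speaker embeddings; the random values above merely stand in for those learned quantities.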

Cited by 1 publication (1 citation statement)
References 11 publications
“…Modern TTS models have great factorization abilities [42,82,89], allowing users to independently change text, prosody, and speaker identity (i.e., what and how something is being said by whom). Harnessing such rich latent features [72] not only facilitates the crafting of new voice personae [33,76], but also ensures that these synthesized voices encapsulate the nuances and diversity inherent to human speech.…”
Section: Robots and Speech
Citation type: mentioning (confidence: 99%)