Fast, Compact, and High Quality LSTM-RNN Based Statistical Parametric Speech Synthesizers for Mobile Devices
Preprint, 2016
DOI: 10.48550/arxiv.1606.06061

Cited by 9 publications (10 citation statements) | References 0 publications
“…Another direction focuses on efficient storage and representation of weights. Various techniques, such as weight sharing within Toeplitz matrices [19], weight tying through effective hashing [20], and appropriate weight quantization [21][22][23], can greatly reduce model size, in some cases at the expense of a slight performance degradation.…”
Section: Related Work
confidence: 99%
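Of the techniques this excerpt lists, weight quantization is the simplest to illustrate. Below is a minimal NumPy sketch of uniform 8-bit quantization of a weight matrix; the function names and the specific scheme are illustrative assumptions, not the exact methods of the cited works [21]-[23].

```python
import numpy as np

# Illustrative sketch of uniform 8-bit weight quantization (one of the
# model-size reduction techniques cited above); the cited papers may
# use different quantization schemes.

def quantize_uint8(w: np.ndarray):
    """Map float weights to uint8 codes plus (scale, offset) metadata."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against constant matrices
    codes = np.round((w - w_min) / scale).astype(np.uint8)
    return codes, scale, w_min

def dequantize_uint8(codes, scale, w_min):
    """Recover approximate float weights from the uint8 codes."""
    return codes.astype(np.float32) * scale + w_min

w = np.random.randn(256, 256).astype(np.float32)
codes, scale, w_min = quantize_uint8(w)
w_hat = dequantize_uint8(codes, scale, w_min)
print("bytes: %d -> %d" % (w.nbytes, codes.nbytes))   # 4x smaller storage
print("max abs error: %.4f" % np.abs(w - w_hat).max())
```

The 4x storage reduction at a small reconstruction error is the "slight performance degradation" trade-off the excerpt describes.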
“…Since production speech synthesis systems have very low tolerance for such instability, end-to-end speech synthesis systems have not been widely deployed in practical applications. In DurIAN, we replace the attention mechanism with an alignment model [15,16], in which the alignment between the phoneme sequence and the target acoustic sequence is inferred from a phoneme duration prediction model. The duration of each phoneme is measured by the number of aligned acoustic frames.…”
Section: Alignment Model
confidence: 99%
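The duration-driven alignment described in this excerpt amounts to repeating each phoneme's encoder state for its predicted number of acoustic frames (often called length regulation). The following is a minimal sketch under assumed shapes; the names are illustrative, not DurIAN's actual API.

```python
import numpy as np

# Minimal sketch of duration-based alignment ("length regulation"):
# each phoneme's hidden state is repeated once per acoustic frame that
# the duration model assigns to it. Names and shapes are illustrative.

def length_regulate(phoneme_states: np.ndarray,
                    durations: np.ndarray) -> np.ndarray:
    """phoneme_states: (num_phonemes, hidden_dim) encoder outputs.
    durations: (num_phonemes,) integer frame counts per phoneme.
    Returns frame-level states of shape (sum(durations), hidden_dim)."""
    return np.repeat(phoneme_states, durations, axis=0)

states = np.random.randn(4, 8)       # 4 phonemes, 8-dim encodings
durations = np.array([3, 5, 2, 4])   # frames aligned to each phoneme
frames = length_regulate(states, durations)
print(frames.shape)                  # (14, 8): one row per acoustic frame
```

Because the expansion is deterministic given the durations, it avoids the skipped or repeated words that attention instability can cause.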
“…Non-pruning methods. In addition to pruning, other approaches also make significant contributions to LSTM compression, including distillation (Tian et al, 2017), matrix factorization (Kuchaiev & Ginsburg, 2017), parameter sharing (Lu et al, 2016), group Lasso regularization (Wen et al, 2017), weight quantization (Zen et al, 2016), etc.…”
Section: LSTM Compression
confidence: 99%
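Among the approaches this excerpt lists, matrix factorization is straightforward to sketch: replace a dense weight matrix with two low-rank factors obtained from a truncated SVD. This is a generic illustration under an assumed rank, not the exact construction of Kuchaiev & Ginsburg (2017).

```python
import numpy as np

# Generic sketch of low-rank factorization for compressing a recurrent
# weight matrix: W (m x n) is approximated by A (m x r) @ B (r x n).
# The rank r is an assumed hyperparameter trading size for error;
# trained weight matrices are typically far more compressible than
# the random matrix used here.

def low_rank_factorize(W: np.ndarray, rank: int):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # (m, rank), singular values folded in
    B = Vt[:rank, :]             # (rank, n)
    return A, B

m, n, r = 512, 512, 64
W = np.random.randn(m, n).astype(np.float32)
A, B = low_rank_factorize(W, r)
print("params: %d -> %d (%.1fx smaller)"
      % (W.size, A.size + B.size, W.size / (A.size + B.size)))
print("relative error: %.3f"
      % (np.linalg.norm(W - A @ B) / np.linalg.norm(W)))
```

At inference time, the matrix-vector product W @ x becomes A @ (B @ x), cutting both storage and multiply-accumulate operations when r is well below min(m, n).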