2024
DOI: 10.1109/taslp.2023.3348762

Text-to-Speech for Low-Resource Agglutinative Language With Morphology-Aware Language Model Pre-Training

Rui Liu, Yifan Hu, Haolin Zuo, et al.
Cited by 6 publications (3 citation statements)
References: 51 publications
“…However, morphological segmentation, which divides words into their smallest semantic units while maintaining semantic information, effectively alleviates the data sparsity issue caused by rich morphology. Therefore, morphological segmentation and stemming are widely used in various downstream natural language processing tasks such as named entity recognition [8], keyword extraction [4], question answering [9], speech recognition [10], machine translation [11,12], and language modeling [3].…”
Section: نىڭكى (citation type: mentioning)
Confidence: 99%
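To make "dividing words into their smallest semantic units" concrete, here is a minimal, purely illustrative sketch of suffix-based segmentation for an agglutinative word. The suffix inventory, the example word, and the greedy longest-match strategy are all hypothetical and are not the segmentation method of the cited or citing papers.

```python
# Toy greedy suffix stripper: splits a word into a stem plus suffix morphemes.
# Suffix list and strategy are hypothetical, for illustration only.
SUFFIXES = ["lar", "ler", "ning", "ni", "da", "dan"]  # assumed inventory

def segment(word: str) -> list[str]:
    """Greedily strip known suffixes from the end of the word, longest first."""
    morphs: list[str] = []
    changed = True
    while changed:
        changed = False
        for suf in sorted(SUFFIXES, key=len, reverse=True):
            if word.endswith(suf) and len(word) > len(suf):
                morphs.insert(0, suf)          # keep surface order of morphemes
                word = word[: -len(suf)]
                changed = True
                break
    return [word] + morphs

print(segment("kitablarni"))  # ['kitab', 'lar', 'ni'] -> stem + plural + case suffix
```

Segmenting this way maps many inflected surface forms onto a shared stem, which is how the data-sparsity problem described in the quoted statement gets alleviated.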
“…Models are trained on a 3090 GPU. The supervised experiment parameters are as follows: character embedding dimension is 128, CNN window sizes are [1,3,5], and BiLSTM hidden layer dimension is 128. The input dimension of the Transformer is 512, with a single-layer Transformer structure and an 8-head attention mechanism.…”
Section: Hyperparameters (citation type: mentioning)
Confidence: 99%
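As a rough illustration of the quoted configuration, the following PyTorch sketch wires the stated dimensions together. Only the numbers come from the citation statement (character embedding 128, CNN windows [1, 3, 5], BiLSTM hidden size 128, a single 8-head Transformer layer with input dimension 512); the module composition, the projection to 512, padding, and activation choices are assumptions.

```python
# Hedged sketch of the described encoder stack; not the citing paper's code.
import torch
import torch.nn as nn


class CharEncoder(nn.Module):
    def __init__(self, vocab_size: int = 100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 128)           # char embedding dim 128
        # Multi-window 1-D convolutions over the character sequence (windows 1, 3, 5).
        self.convs = nn.ModuleList(
            nn.Conv1d(128, 128, kernel_size=k, padding=k // 2) for k in (1, 3, 5)
        )
        self.bilstm = nn.LSTM(128 * 3, 128, batch_first=True, bidirectional=True)
        # Assumed projection from BiLSTM output (2 * 128) to the Transformer input dim 512.
        self.proj = nn.Linear(256, 512)
        layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=1)  # single layer

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(char_ids)                  # (B, T, 128)
        x = x.transpose(1, 2)                     # (B, 128, T) for Conv1d
        x = torch.cat([torch.relu(c(x)) for c in self.convs], dim=1)
        x = x.transpose(1, 2)                     # (B, T, 384)
        x, _ = self.bilstm(x)                     # (B, T, 256)
        return self.transformer(self.proj(x))     # (B, T, 512)


if __name__ == "__main__":
    out = CharEncoder()(torch.randint(0, 100, (2, 16)))
    print(out.shape)  # torch.Size([2, 16, 512])
```

The projection layer is needed here only because a bidirectional LSTM with hidden size 128 emits 256-dimensional states while the quoted Transformer input dimension is 512; the citing paper may bridge that gap differently.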