Interspeech 2022
DOI: 10.21437/interspeech.2022-11164
Acoustic-to-articulatory Speech Inversion with Multi-task Learning

Cited by 5 publications (10 citation statements)
References 0 publications
“…For our AAI train-val-test split, we hold out 1 male and 1 female speaker for our test set and train on the remaining 6 speakers, as done in [15]. While [15] also used these two speakers in their validation set, we put all of the data from both speakers in the test set in order to fully experiment within the unseen speaker setting. We note that our formulation increases the difficulty of the task compared to [15] since hyperparameter tuning would not lead to overfitting on the test speakers.…”
Section: HPRC EMA Dataset
confidence: 99%
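The citation statement above describes a held-out-speaker protocol: one male and one female speaker form the test set, all of their data is excluded from training and validation, and the remaining six HPRC speakers are used for training. Below is a minimal sketch of that split, assuming the corpus is already loaded as a mapping from speaker IDs to utterance lists; the speaker IDs, function name, and corpus loader are illustrative placeholders, not the cited papers' actual code.

```python
# Minimal sketch of the held-out-speaker split described above.
# Speaker IDs ("M04", "F04") and the corpus layout are assumptions,
# not taken from the cited papers.

from typing import Dict, List, Tuple


def speaker_holdout_split(
    corpus: Dict[str, List[str]],
    test_speakers: Tuple[str, str] = ("M04", "F04"),  # 1 male + 1 female held out
) -> Tuple[List[str], List[str]]:
    """Train on the remaining speakers; put ALL data from the two
    held-out speakers in the test set (unseen-speaker setting)."""
    train, test = [], []
    for speaker, utterances in corpus.items():
        if speaker in test_speakers:
            test.extend(utterances)  # no held-out data leaks into train/val
        else:
            train.extend(utterances)
    return train, test


# Usage (load_hprc is a hypothetical loader returning {speaker_id: [utterances]}):
# train_utts, test_utts = speaker_holdout_split(load_hprc())
```

Placing every utterance from the held-out speakers in the test set, rather than splitting them between validation and test, keeps hyperparameter tuning from indirectly overfitting to the test speakers, which is the point the citing authors make.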