Interspeech 2019
DOI: 10.21437/interspeech.2019-2151

Hypernasality Severity Detection Using Constant Q Cepstral Coefficients

Abstract: In this work, detection of hypernasality severity in cleft palate speech is attempted using the constant Q cepstral coefficient (CQCC) feature. The coupling of the nasal tract with the oral tract during the production of hypernasal speech adds nasal formants and anti-formants in the low-frequency region of the vowel spectrum, mainly around the first formant. The strength and position of the nasal formants and anti-formants, along with the oral formants, change as the severity of nasality changes in hypernasal speech. The CQCC feat…
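For a concrete picture of the feature the abstract describes, below is a minimal CQCC extraction sketch. It is an illustrative reconstruction, not the authors' code: it assumes librosa for the constant-Q transform and follows the common CQCC recipe (log-power CQT, resampling of the geometric frequency axis onto a uniform grid, then a DCT over frequency); the bin counts, octave span, and number of retained coefficients are illustrative and assume a sampling rate of roughly 16 kHz or higher.

```python
# Illustrative CQCC extraction (assumed recipe, not the paper's exact pipeline):
# constant-Q transform -> log power -> interpolate the geometric frequency axis
# onto a uniform grid -> DCT over frequency -> keep the first few coefficients.
import numpy as np
import librosa
from scipy.fftpack import dct

def cqcc(y, sr, n_coeffs=20, bins_per_octave=96, n_octaves=7, fmin=32.0):
    n_bins = n_octaves * bins_per_octave
    # Geometrically spaced CQT bins give fine resolution at low frequencies,
    # where the nasal formants/anti-formants around the first formant appear.
    C = np.abs(librosa.cqt(y, sr=sr, fmin=fmin,
                           n_bins=n_bins, bins_per_octave=bins_per_octave))
    log_power = np.log(C ** 2 + 1e-12)

    # Interpolate each frame from the geometric CQT frequency axis onto a
    # uniform (linear) frequency grid before taking the DCT.
    geo_freqs = librosa.cqt_frequencies(n_bins=n_bins, fmin=fmin,
                                        bins_per_octave=bins_per_octave)
    lin_freqs = np.linspace(geo_freqs[0], geo_freqs[-1], n_bins)
    uniform = np.stack([np.interp(lin_freqs, geo_freqs, frame)
                        for frame in log_power.T], axis=1)

    # DCT over frequency yields one cepstral vector per frame.
    return dct(uniform, type=2, axis=0, norm='ortho')[:n_coeffs]

# Example use: frame-level CQCCs for a vowel segment, which could then feed a
# severity classifier (filenames and downstream model are hypothetical).
# y, sr = librosa.load("vowel.wav", sr=None)
# features = cqcc(y, sr)   # shape: (n_coeffs, n_frames)
```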

Cited by 4 publications (2 citation statements)
References 29 publications (35 reference statements)
“…These improvements also demonstrate the effectiveness of our method. Besides, our method is capable of generality and can be applied to any neural networks used in previous works [10,12].…”
[Table 3 of the citing work: Results of hypernasality estimation on NMCPC and CNH cleft palate datasets.]
Section: Hypernasality Estimation Accuracy (mentioning)
confidence: 99%
“…[9] analyzed multiple different acoustic features for automatic identification of hypernasality. [10,11] also designed some novel acoustic features to better extract hypernasality-related semantics. These works mainly focused on extracting or designing advanced acoustic features for hypernasal speech detection.…”
Section: Introduction (mentioning)
confidence: 99%