2020 5th International Conference on Computer and Communication Systems (ICCCS)
DOI: 10.1109/icccs49078.2020.9118462

Improved Silence-Unvoiced-Voiced (SUV) Segmentation for Dysarthric Speech Signals using Linear Prediction Error Variance

Cited by 5 publications (3 citation statements)
References 14 publications
“…Before starting to apply the proposed approach, it is first necessary to pre-process the signals as shown in Figure 1. This involves first removing silence, then detecting the beginning and the end of the speech using the zero-crossing rate (ZCR) [16]-[19]. The third step is making the signals equal in length by zero padding [20], [21], because the training dataset can contain signals of different lengths.…”
Section: Research Methods 2.1 Pre-processing
Citation type: mentioning (confidence: 99%)
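The pre-processing chain quoted above (silence removal, ZCR-based endpoint detection, zero padding) can be sketched roughly as follows. This is a minimal illustration, not the cited authors' implementation: the frame length, hop size, both thresholds, and the added energy check are assumptions chosen for the example.

import numpy as np

def zero_crossing_rate(frame):
    # Fraction of adjacent sample pairs whose signs differ.
    signs = np.sign(frame)
    signs[signs == 0] = 1
    return np.mean(signs[:-1] != signs[1:])

def trim_silence(signal, frame_len=256, hop=128, zcr_thresh=0.05, energy_thresh=1e-4):
    # Mark frames as speech when ZCR or energy exceeds a (hypothetical) threshold,
    # then keep only the span between the first and last speech frame.
    n_frames = max(0, 1 + (len(signal) - frame_len) // hop)
    speech_frames = [i for i in range(n_frames)
                     if zero_crossing_rate(signal[i * hop:i * hop + frame_len]) > zcr_thresh
                     or np.mean(signal[i * hop:i * hop + frame_len] ** 2) > energy_thresh]
    if not speech_frames:
        return signal
    start = speech_frames[0] * hop
    end = speech_frames[-1] * hop + frame_len
    return signal[start:end]

def pad_to_length(signal, target_len):
    # Zero-pad shorter signals (and truncate longer ones) to a common length.
    if len(signal) >= target_len:
        return signal[:target_len]
    return np.pad(signal, (0, target_len - len(signal)))

A typical call for each training signal would be pad_to_length(trim_silence(x), target_len), with target_len set to the longest trimmed signal in the dataset.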
“…Atal and Rabiner [24] utilized ZCR, STE, the correlation between adjacent speech samples, LPC analysis, and LPC error for the segmentation of voiced and unvoiced speech. Ijitona et al. [25] proposed a method based on the combination of linear prediction error variance (LPEV), STE, and ZCR for the segmentation of voiced, unvoiced, and silence regions. In this study, we employed STE to categorize each speech frame into voiced and unvoiced frames.…”
Section: Segmentation of Voiced and Unvoiced Region
Citation type: mentioning (confidence: 99%)
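A frame-level short-time-energy classifier of the kind described in the last sentence of that excerpt might look like the sketch below. The relative threshold of 10% of the peak frame energy, and the frame/hop sizes, are assumptions for illustration, not values from the cited study.

import numpy as np

def short_time_energy(signal, frame_len=256, hop=128):
    # Mean squared amplitude of each analysis frame.
    n_frames = max(0, 1 + (len(signal) - frame_len) // hop)
    return np.array([np.mean(signal[i * hop:i * hop + frame_len] ** 2)
                     for i in range(n_frames)])

def voiced_frame_mask(signal, frame_len=256, hop=128, rel_thresh=0.1):
    # Voiced frames typically carry far more energy than unvoiced ones, so a frame
    # is labelled voiced when its STE exceeds a fraction of the peak STE.
    ste = short_time_energy(signal, frame_len, hop)
    if ste.size == 0:
        return np.zeros(0, dtype=bool)
    return ste > rel_thresh * ste.max()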
“…Researchers implemented these segmentations by using different machine learning and clustering techniques. In [19], the author presented a novel approach for segmenting dysarthric speech into silence, unvoiced, and voiced segments. Short-time energy, zero-crossing rate, and linear prediction error variance are used to solve the segmentation problem in that work.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
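For the linear prediction error variance feature named in this excerpt and in the paper's title, a rough per-frame computation could proceed as below. The LPC order, the plain autocorrelation solver, and the function names are illustrative assumptions; combining the resulting LPEV values with STE and ZCR thresholds would complete a silence-unvoiced-voiced decision.

import numpy as np

def lpc_coefficients(frame, order=10):
    # Autocorrelation method: solve R a = r for the predictor coefficients a_1..a_p.
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:]) for k in range(order + 1)])
    if r[0] == 0:  # all-zero (silent) frame
        return np.zeros(order)
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.lstsq(R, r[1:], rcond=None)[0]

def lp_error_variance(frame, order=10):
    # Predict each sample from the previous `order` samples and return the
    # variance of the prediction residual (the per-frame LPEV feature).
    a = lpc_coefficients(frame, order)
    residual = np.array([frame[n] - np.dot(a, frame[n - order:n][::-1])
                         for n in range(order, len(frame))])
    return np.var(residual) if residual.size else 0.0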