2006
DOI: 10.1016/j.bspc.2006.06.003
Methodological issues in the development of automatic systems for voice pathology detection


Cited by 140 publications (87 citation statements)
References 26 publications
“…The considered statistics are the mean, standard deviation, skewness and kurtosis, so each recording is represented by a total of 40 features (four statistics over ten features). The system's performance is validated by dividing the data into 70% for training and 30% for testing, following the methodology described in [17]. The 70% of the data is used for feature selection and for training the classifier, and the remaining 30% is used for testing; the training and testing subsets are randomly re-formed ten times.…”
Section: Methods
confidence: 99%
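The statistic pooling and repeated 70/30 splitting quoted above can be sketched as follows. This is a minimal illustration with synthetic data, not the cited paper's code: the array shapes, number of recordings, and random seed are all assumptions.

```python
import numpy as np

def pool_statistics(frame_features):
    """Collapse a (frames x 10) feature matrix into one 40-dimensional
    vector: mean, standard deviation, skewness and kurtosis per feature."""
    x = np.asarray(frame_features, dtype=float)
    mu = x.mean(axis=0)
    sd = x.std(axis=0)
    z = (x - mu) / sd  # standardized values for the moment-based statistics
    return np.concatenate([mu, sd, (z ** 3).mean(axis=0), (z ** 4).mean(axis=0)])

rng = np.random.default_rng(0)
# Synthetic corpus: 20 recordings, each with 100 frames of 10 raw features.
recordings = [rng.normal(size=(100, 10)) for _ in range(20)]
X = np.stack([pool_statistics(r) for r in recordings])  # shape (20, 40)

# Ten random 70/30 train/test partitions of the recordings.
splits = []
for _ in range(10):
    perm = rng.permutation(len(X))
    cut = int(0.7 * len(X))
    splits.append((perm[:cut], perm[cut:]))
```

Each of the ten partitions would then drive one independent feature-selection and classification experiment.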
“…The feature selection process is applied again, and the decision about whether a speech recording comes from PPD or CS is made with an SVM. The results are reported according to [17], indicating accuracy, specificity and sensitivity. Specificity is the probability that a healthy recording is correctly detected, and sensitivity is the probability that a pathological signal is correctly classified.…”
Section: Methods
confidence: 99%
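The two rates defined in the quote follow directly from a confusion matrix. A small sketch, with healthy coded as 0 and pathological as 1 (the example labels and predictions are invented for illustration):

```python
def specificity_sensitivity(y_true, y_pred):
    """Labels: 0 = healthy/normal, 1 = pathological.
    Specificity = correctly detected healthy / all healthy recordings.
    Sensitivity = correctly classified pathological / all pathological."""
    pairs = list(zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in pairs)  # true negatives
    fp = sum(t == 0 and p == 1 for t, p in pairs)  # false positives
    tp = sum(t == 1 and p == 1 for t, p in pairs)  # true positives
    fn = sum(t == 1 and p == 0 for t, p in pairs)  # false negatives
    return tn / (tn + fp), tp / (tp + fn)

spec, sens = specificity_sensitivity([0, 0, 1, 1], [0, 1, 1, 1])
# One of two healthy recordings is correct -> specificity 0.5;
# both pathological recordings are correct -> sensitivity 1.0.
```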
“…This result can also be seen in the receiver operating characteristic (ROC) curves shown in figure 3. These kinds of curves are widely used in clinical applications, and the area under such a curve (AUC) is considered a good statistic for representing the overall performance of the system [19].…”
Section: Results
confidence: 99%
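The AUC mentioned in the quote can be computed without tracing the full ROC curve, via its probabilistic interpretation: the chance that a randomly chosen pathological recording scores higher than a randomly chosen healthy one. A sketch (the score lists are invented):

```python
def auc(scores_pathological, scores_healthy):
    """AUC as the fraction of (pathological, healthy) pairs where the
    pathological recording gets the higher score; ties count as 0.5.
    Equivalent to the normalized Mann-Whitney U statistic."""
    wins = 0.0
    for sp in scores_pathological:
        for sh in scores_healthy:
            if sp > sh:
                wins += 1.0
            elif sp == sh:
                wins += 0.5
    return wins / (len(scores_pathological) * len(scores_healthy))
```

A perfect separation of the two classes gives `auc(...) == 1.0`, while identical score distributions give 0.5, the chance level.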
“…Table 2. Index allocation for features:

Statistic | Feature indices
Mean      | 1–13
Std       | 14–26
Kurtosis  | 27–39
Skewness  | 40–50

The tests performed on the proposed system follow the strategy indicated in [19]. The 70% of the data is used for feature selection and for training the classifier, and the remaining 30% for testing; ten different training and testing subsets are formed by randomizing the data ten times, giving a total of ten independent experiments, each with its own results. This allows the calculation of confidence intervals for the overall performance and robustness analysis of the proposed system.…”
Section: Experimental Framework
confidence: 99%
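The per-repetition results can be turned into the confidence intervals the quote mentions. A sketch assuming a Student-t interval over ten hypothetical accuracies: the numbers are invented, and 2.262 is the two-sided 95% t critical value for 9 degrees of freedom.

```python
import statistics

# Hypothetical accuracies from ten independent 70/30 experiments.
accuracies = [0.89, 0.91, 0.90, 0.88, 0.92, 0.90, 0.89, 0.91, 0.87, 0.93]

mean = statistics.mean(accuracies)
sem = statistics.stdev(accuracies) / len(accuracies) ** 0.5  # standard error
t95 = 2.262  # two-sided 95% Student-t critical value, 9 degrees of freedom
ci_low, ci_high = mean - t95 * sem, mean + t95 * sem
```

Reporting the interval rather than a single accuracy makes the robustness claim in the quote concrete: a narrow interval means the ten random splits agree.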
“…A subset of 173 pathological and 53 normal registers has been taken, according to those enumerated by Parsa et al [2]. The imbalance between the numbers of normal and pathological records has not been considered a problem, because pathological recordings are approximately 1 s long whereas normal recordings last around 3 s. For a more detailed discussion of this database, see [11].…”
Section: A Database
confidence: 99%