2018 IEEE Life Sciences Conference (LSC)
DOI: 10.1109/lsc.2018.8572136

Parkinson's Disease Diagnosis Based on Multivariate Deep Features of Speech Signal

Cited by 25 publications (15 citation statements)
References 13 publications
“…Behroozi and Sami [20] introduced a multi-classifier framework to separate PD patients from healthy controls. Deep learning has also been applied to the classification of voice recordings [21]. Vaiciuknas et al. [12] investigated a strategy for PD screening from sustained phoneme parameters and text-dependent speech modalities.…”
Section: Introduction
confidence: 99%
“…Low HNR is an indicator of the presence of dysarthria. Studies have reported the use of other features such as the fractal dimension (FD) [25], the linear predictive model (LPM) [13], multivariate deep features [21], and entropy [25]. Many of these studies have shown that these features are very effective in differentiating between the voices of PD patients and healthy people.…”
Section: Introduction
confidence: 99%
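The statement above names several hand-crafted acoustic descriptors. As a minimal sketch of how two of them (MFCCs and a spectral-entropy measure) could be extracted from a sustained-vowel recording, assuming librosa and numpy are available and using a hypothetical file name; HNR and fractal dimension are typically computed with dedicated tools such as Praat and are omitted here:

import numpy as np
import librosa

# Hypothetical recording of a sustained vowel; sr=None keeps the native sampling rate.
y, sr = librosa.load("sustained_vowel.wav", sr=None)

# 13 MFCCs, averaged over time, giving one coefficient vector per recording.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

# Spectral entropy: Shannon entropy of the per-frame normalized power spectrum.
spec = np.abs(librosa.stft(y)) ** 2
p = spec / (spec.sum(axis=0, keepdims=True) + 1e-12)
entropy = (-(p * np.log2(p + 1e-12)).sum(axis=0)).mean()

features = np.concatenate([mfcc, [entropy]])
print(features.shape)  # (14,) feature vector for a downstream classifier

This is only an illustration of the feature types mentioned in the citation, not the pipeline used by any of the cited papers.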
“…Over the last few years, with the increase in computing power, several Deep Neural Network (DNN) techniques have emerged in PD detection. Some studies applied Convolutional Neural Networks to spectrograms (Vásquez-Correa et al., 2017; Khojasteh et al., 2018; Zhang et al., 2018). Others used DNNs to extract phonological features from MFCCs (Garcia-Ospina et al., 2018), or to detect PD directly from global features (Rizvi et al., 2020).…”
Section: Introduction
confidence: 99%
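For the "CNN on spectrograms" approach mentioned above, a minimal PyTorch sketch is shown below. The architecture, input size, and class count are illustrative assumptions, not the networks used in the cited studies:

import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    """Toy CNN classifying fixed-size mel-spectrogram patches as PD vs. healthy control."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling over time-frequency
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, 1, n_mels, n_frames)
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = SpectrogramCNN()
dummy = torch.randn(4, 1, 128, 128)  # batch of 4 spectrogram patches
print(model(dummy).shape)            # torch.Size([4, 2])

The adaptive pooling layer makes the sketch tolerant of modest variation in spectrogram length, which is a common design choice when recordings differ in duration.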
“…However, CNN and transfer learning techniques are not limited to imaging data; they can also learn complex features from voice and signal data [29]. Numerous studies used the biomedical voice (n = 21) [4, 6, 22, 23, 29, 33, 44, 48, 50, 52, 53, 55, 60, 61, 73, 74, 84, 93, 100, 104, 105] and biometric signals (n = 14) [26, 31, 34, 36, 45, 46, 57, 62, 64, 65, 68, 89, 96, 98]; a few of the included studies used EEG and EMG signals (n = 5) [32, 39, 51, 83, 85].…”
Section: Results
confidence: 99%