Aim: This paper introduces Apkinson, a mobile application for motor evaluation and monitoring of Parkinson’s disease patients. Materials & methods: The App is based on previously reported methods, for instance the evaluation of articulation and pronunciation in speech, regularity and freezing of gait in walking, and tapping accuracy in hand movement. Results: Preliminary experiments indicate that most of the measurements are suitable to discriminate between patients and controls. Significance is evaluated through statistical tests. Conclusion: Although the reported results correspond to preliminary experiments, we believe that Apkinson is a useful App that can help patients, caregivers, and clinicians perform a more accurate monitoring of disease progression. Additionally, the mobile App can serve as a personal health assistant.
Background and objectives: Parkinson's disease is a neurological disorder that affects the motor system, producing lack of coordination, resting tremor, and rigidity. Impairments in handwriting are among the main symptoms of the disease. Handwriting analysis can help support the diagnosis and monitor the progression of the disease. This paper aims to evaluate the importance of different groups of features for modeling the handwriting deficits that appear due to Parkinson's disease, and to assess how well those features discriminate between Parkinson's disease patients and healthy subjects. Methods: Features based on kinematic, geometrical, and non-linear dynamics analyses were evaluated to classify Parkinson's disease patients and healthy subjects. Classifiers based on K-nearest neighbors, support vector machines, and random forests were considered. Results: Accuracies of up to 93.1% were obtained in the classification of patients and healthy control subjects. A relevance analysis indicated that features related to speed, acceleration, and pressure are the most discriminant. The automatic classification of patients in different stages of the disease yielded κ indexes between 0.36 and 0.44. Accuracies of up to 83.3% were obtained on a separate dataset used only for validation purposes. Conclusions: The results confirmed the negative impact of aging on the classification process when different groups of healthy subjects were considered. In addition, the results reported on the separate validation set constitute a step towards the development of automated tools to support the diagnostic process in clinical practice.
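The classification setup described above can be illustrated with a minimal sketch. A small pure-Python K-nearest-neighbors classifier stands in for the paper's classifiers, and the two-dimensional "kinematic" features (pen speed, pen pressure) and all numbers below are synthetic illustrations, not data from the study:

```python
import math
import random

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    dists = sorted(
        (math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y)
    )
    votes = [yi for _, yi in dists[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical 2-D feature vectors: (mean pen speed, mean pen pressure).
# The clusters are synthetic; slower, lighter writing stands in for PD.
random.seed(0)
pd_samples = [(random.gauss(1.0, 0.2), random.gauss(0.4, 0.1)) for _ in range(20)]
hc_samples = [(random.gauss(2.0, 0.2), random.gauss(0.8, 0.1)) for _ in range(20)]
X = pd_samples + hc_samples
y = ["PD"] * 20 + ["HC"] * 20

print(knn_predict(X, y, (1.0, 0.4)))  # query near the synthetic PD cluster
print(knn_predict(X, y, (2.0, 0.8)))  # query near the synthetic HC cluster
```

In practice each handwriting sample would yield many more features (speed, acceleration, pressure, geometry), and evaluation would use held-out data rather than queries at the cluster centers.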
This paper focuses on selecting features that can best represent the pathophysiology of Parkinson's disease (PD) dysarthria. PD dysarthria has often been the subject of feature selection and classification experiments, but rarely have the selected features been matched to the pathophysiology of PD dysarthria. PD dysarthria manifests through changes in the control of a person's speech production muscles and affects respiration, articulation, resonance, and laryngeal properties, resulting in speech characteristics such as short phrases separated by pauses, reduced speed of non-repetitive syllables or supernormal speed of repetitive syllables, reduced resonance, irregular vowel generation, etc. Articulation, phonation, diadochokinesis (DDK) rhythm, and Empirical Mode Decomposition (EMD) features were extracted from the DDK and sustained /a/ recordings of the Spanish GITA Corpus. These recordings were captured from 50 healthy control (HC) and 50 PD subjects. A two-stage filter-wrapper feature selection process was applied to reduce the number of features from 3,534 to 15. These 15 features mainly represent the instability of the voice and rhythm. SVM, random forest, and naive Bayes classifiers were used to test the discriminative power of the selected features. The results showed that these sustained /a/ and /pa-ta-ka/ stability features can discriminate PD from HC with 70% accuracy.
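A two-stage filter-wrapper pipeline of this kind can be sketched in a few lines. This is an illustration of the general technique, not the paper's exact procedure: the filter stage here ranks features by absolute correlation with the label, and the wrapper stage runs greedy forward selection guided by a simple nearest-centroid classifier; all function names and data are ours:

```python
import random

def correlation(xs, ys):
    """Pearson correlation between a feature column and the labels."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def filter_stage(X, y, m):
    """Stage 1 (filter): keep the m features most correlated with the label."""
    scores = [(abs(correlation([row[j] for row in X], y)), j)
              for j in range(len(X[0]))]
    return [j for _, j in sorted(scores, reverse=True)[:m]]

def centroid_accuracy(X, y, feats):
    """Resubstitution accuracy of a nearest-class-centroid rule on `feats`."""
    classes = sorted(set(y))
    cent = {c: [sum(X[i][j] for i in range(len(X)) if y[i] == c) / y.count(c)
                for j in feats] for c in classes}
    hits = sum(
        min(classes, key=lambda c: sum((xi[j] - cj) ** 2
            for j, cj in zip(feats, cent[c]))) == yi
        for xi, yi in zip(X, y))
    return hits / len(y)

def wrapper_stage(X, y, candidates, k):
    """Stage 2 (wrapper): greedy forward selection guided by the classifier."""
    selected = []
    while len(selected) < k:
        best = max((f for f in candidates if f not in selected),
                   key=lambda f: centroid_accuracy(X, y, selected + [f]))
        selected.append(best)
    return selected

# Synthetic data: only feature 0 separates the classes; 1 and 2 are noise.
random.seed(1)
X = [[random.gauss(2.0 * cls, 0.3), random.gauss(0, 1), random.gauss(0, 1)]
     for cls in [0] * 15 + [1] * 15]
y = [0] * 15 + [1] * 15

kept = filter_stage(X, y, m=2)        # filter keeps feature 0 plus one more
final = wrapper_stage(X, y, kept, k=1)
print(final)
```

Going from 3,534 features to 15 as in the paper would simply use larger `m` and `k`, a stronger filter criterion, and cross-validated rather than resubstitution accuracy inside the wrapper.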
Parkinson's disease patients develop different speech impairments that affect their communication capabilities. The automatic assessment of the patients' speech allows the development of computer-aided tools to support the diagnosis and the evaluation of the disease severity. This paper introduces a methodology to classify Parkinson's disease from speech in three different languages: Spanish, German, and Czech. The proposed approach considers convolutional neural networks trained with time-frequency representations and a transfer learning strategy among the three languages. The transfer learning scheme aims to improve the accuracy of the models when the weights of the neural network are initialized with utterances from a language different from the one used for the test set. The results suggest that the proposed strategy improves the accuracy of the models by up to 8% when the base model used to initialize the weights of the classifier is robust enough. In addition, the results obtained after transfer learning are in most cases more balanced in terms of specificity and sensitivity than those of models trained without the transfer learning strategy.
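The core of this transfer scheme is warm-starting: the target-language model is initialized with weights learned on another language instead of from scratch. As a hedged sketch of that idea only, the snippet below replaces the CNN with a tiny pure-Python logistic regression and the languages with synthetic one-dimensional datasets; every name and number is ours, not the paper's:

```python
import math
import random

def train_logreg(X, y, w=None, epochs=200, lr=0.5):
    """Batch gradient descent for logistic regression; `w` warm-starts the weights."""
    d = len(X[0])
    w = list(w) if w is not None else [0.0] * d
    for _ in range(epochs):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(a * b for a, b in zip(w, xi))))
            for j in range(d):
                grad[j] += (p - yi) * xi[j]
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad)]
    return w

def acc(w, X, y):
    """Fraction of samples on the correct side of the decision boundary."""
    preds = [1 if sum(a * b for a, b in zip(w, xi)) > 0 else 0 for xi in X]
    return sum(p == t for p, t in zip(preds, y)) / len(y)

def make_lang(n_per_class, shift, seed):
    """Synthetic 'language': same PD/HC contrast, slightly shifted features."""
    rng = random.Random(seed)
    labels = [0] * n_per_class + [1] * n_per_class
    X = [[rng.gauss(shift + (2 * cls - 1), 0.5), 1.0] for cls in labels]
    return X, labels

Xa, ya = make_lang(30, 0.0, seed=0)   # large "base language" set
Xb, yb = make_lang(5, 0.3, seed=1)    # small "target language" set

w_base = train_logreg(Xa, ya)                      # pre-train on the base language
w_ft = train_logreg(Xb, yb, w=w_base, epochs=50)   # warm start, then fine-tune
w_scratch = train_logreg(Xb, yb, epochs=50)        # baseline: no transfer
```

The paper's point that the base model must be "robust enough" maps here to `w_base` being well trained: a poorly trained base initialization would give the fine-tuned model no head start over `w_scratch`.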
Speech impairments are one of the earliest manifestations in patients with Parkinson's disease. In particular, articulation impairments related to the capability of the speaker to move the limbs and muscles of the vocal tract have been observed in the patients. Articulation deficits have been evaluated in the patients mainly using diadochokinetic exercises, which consist of the rapid repetition of syllables such as /pa-ta-ka/. This study considered different features to model several aspects of the diadochokinetic exercises, including the capacity to start and stop the vocal fold vibration, the speech rate, and the regularity of the diadochokinetic task. Articulation features are combined with others resulting from an empirical mode decomposition procedure, which has recently been used to model dysphonia in Parkinson's patients. The features are used to classify Parkinson's patients and healthy speakers, and to predict the dysarthria severity of the participants according to a clinical scale. According to the results, articulation features can classify the presence of the disease with an accuracy of up to 76%, and predict the dysarthria level of the speakers with a Spearman's correlation of up to 0.68.
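The severity-prediction result above is reported as a Spearman's rank correlation between predicted and clinician-rated dysarthria scores. A minimal pure-Python version of that metric, applied to invented severity values for illustration:

```python
def rankdata(xs):
    """1-based ranks with ties averaged, as Spearman's correlation requires."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend the run of tied values
        avg = (i + j) / 2 + 1           # average rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = rankdata(xs), rankdata(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical clinician ratings vs. model predictions (made-up values).
clinical = [0, 1, 1, 2, 3, 3, 4]
predicted = [0.2, 0.9, 1.4, 1.8, 2.6, 3.5, 3.1]
print(round(spearman(clinical, predicted), 2))  # 0.93
```

Because it operates on ranks, the metric rewards getting the ordering of severities right even when the predicted values are on a different scale than the clinical one, which is why it suits this task.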
Parkinson’s disease (PD) is the second most prevalent neurodegenerative disorder in the world, and it is characterized by the production of different motor and non-motor symptoms which negatively affect speech and language production. For decades, the research community has been working on methodologies to automatically model these biomarkers to detect and monitor the disease; however, although speech impairments have been widely explored, language remains underexplored despite being a valuable source of information, especially to assess cognitive impairments associated with non-motor symptoms. This study proposes the automatic assessment of PD patients using different methodologies to model speech and language biomarkers. One-dimensional and two-dimensional convolutional neural networks (CNNs), along with pre-trained models such as Wav2Vec 2.0, BERT, and BETO, were considered to classify PD patients vs. Healthy Control (HC) subjects. The first approach consisted of modeling speech and language independently. Then, the best representations from each modality were combined following early, joint, and late fusion strategies. The results show that the speech modality yielded an accuracy of up to 88%, thus outperforming all language representations, including the multi-modal approach. These results suggest that speech representations better discriminate PD patients and HC subjects than language representations. When analyzing the fusion strategies, we observed that changes in the time span of the multi-modal representation could produce a significant loss of information in the speech modality, which was likely linked to a decrease in accuracy in the multi-modal experiments. Further experiments are necessary to validate this claim with other fusion methods using different time spans.
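Of the fusion strategies mentioned, late fusion is the simplest to illustrate: each modality's classifier emits its own class probability and the fused decision combines them. The sketch below shows a weighted average; the weights and probabilities are invented for illustration and are not values from the study:

```python
def late_fusion(p_speech, p_language, w_speech=0.5):
    """Weighted average of the per-modality PD probabilities."""
    return w_speech * p_speech + (1 - w_speech) * p_language

# Hypothetical per-utterance outputs of the speech and language classifiers.
p_s, p_l = 0.85, 0.60
fused = late_fusion(p_s, p_l, w_speech=0.7)   # weight speech more heavily
label = "PD" if fused >= 0.5 else "HC"
print(round(fused, 3), label)  # 0.775 PD
```

Early and joint fusion instead combine the representations before or during classification, which is where the time-span mismatch between modalities discussed above can discard speech information.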