Background Access to neurological care for Parkinson disease (PD) is a rare privilege for millions of people worldwide, especially in resource-limited countries. In 2013, there were just 1200 neurologists in India for a population of 1.3 billion people; in Africa, the average population per neurologist exceeds 3.3 million people. In contrast, 60,000 people receive a diagnosis of PD every year in the United States alone, and similar patterns of rising PD cases, fueled mostly by environmental pollution and an aging population, can be seen worldwide. The current projection of more than 12 million patients with PD worldwide by 2040 is only part of the picture given that more than 20% of patients with PD remain undiagnosed. Timely diagnosis and frequent assessment are key to ensuring appropriate medical intervention and thus improving the quality of life of patients with PD. Objective In this paper, we propose a web-based framework that can help anyone anywhere in the world record a short speech task and analyze the recorded data to screen for PD. Methods We collected data from 726 unique participants (PD: 262/726, 36.1%; non-PD: 464/726, 63.9%; average age 61 years) from all over the United States and beyond. A small portion of the data (approximately 54/726, 7.4%) was collected in a laboratory setting to compare the performance of models trained on noisy home-environment data against high-quality laboratory-environment data. The participants were instructed to utter a popular pangram containing all the letters of the English alphabet: “the quick brown fox jumps over the lazy dog.” We extracted both standard acoustic features (mel-frequency cepstral coefficients and jitter and shimmer variants) and deep learning–based embedding features from the speech data. Using these features, we trained several machine learning algorithms.
We also applied model interpretation techniques such as Shapley additive explanations to ascertain the importance of each feature in determining the model’s output. Results We achieved an area under the curve of 0.753 for determining the presence of self-reported PD by modeling the standard acoustic features with XGBoost, a gradient-boosted decision tree model. Further analysis revealed that the widely used mel-frequency cepstral coefficient features and a subset of previously validated dysphonia features designed for detecting PD from a verbal phonation task (pronouncing “ahh”) influence the model’s decision the most. Conclusions Our model performed equally well on data collected in a controlled laboratory environment and in the wild, across different gender and age groups. Using this tool, we can collect data from almost anyone anywhere with an audio-enabled device and help participants screen for PD remotely, contributing to equity and access in neurological care.
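The modeling step the abstract describes can be sketched as follows. This is a minimal illustration, not the study's pipeline: synthetic vectors stand in for the real acoustic features, scikit-learn's GradientBoostingClassifier stands in for XGBoost, and permutation importance stands in for Shapley additive explanations.

```python
# Sketch of the tabular modeling step: acoustic feature vectors ->
# gradient-boosted trees -> held-out AUC -> feature importance.
# Synthetic data stands in for real MFCC/jitter/shimmer features;
# scikit-learn's GradientBoostingClassifier stands in for XGBoost;
# permutation importance stands in for SHAP values.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 600, 20                                   # participants x features
X = rng.normal(size=(n, d))
# Make the first two features weakly informative, mimicking a handful
# of dysphonia markers carrying most of the signal.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

# Which features drive the decision? (stand-in for SHAP analysis)
imp = permutation_importance(clf, X_te, y_te, n_repeats=5, random_state=0)
print(f"held-out AUC: {auc:.3f}")
```

The same recipe applies with the real features: each recording is summarized as one fixed-length row, so any tabular classifier and any feature-attribution method can be swapped in.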
Rice husk ash (RHA) is a widely available biobased source of high-purity silica. In this work, zeolite faujasite (FAU) is synthesized using silica extracted from RHA collected from a local region of Bangladesh. The synthesized zeolite FAU was used as an adsorbent for the batchwise adsorptive removal of Cr(VI) and Pb(II) from aqueous solutions. It was characterized using X-ray diffraction (XRD), scanning electron microscopy (SEM), nitrogen sorption, and Fourier transform infrared (FT-IR) spectroscopy. Metal ion adsorption studies were performed by varying metal concentration (20–100 mg/L for Cr(VI) and 900–133 mg/L for Pb(II)), sorbent dosage (2–10 g/L for chromium and 0.5–1.5 g/L for lead), and contact time (10–120 min for both metal ions). The maximum adsorption capacity of the RHA-based zeolite FAU was found to be 3.56 mg/g for Cr(VI) and 342.16 mg/g for Pb(II). Since the sorption data fit the Langmuir isotherm, monolayer adsorption is indicated. Regeneration of the RHA-based zeolite FAU with NaCl solution showed the potential for repeated and continuous operation.
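The Langmuir analysis mentioned above can be sketched numerically. The isotherm q = q_max·K·C/(1 + K·C) linearizes to C/q = C/q_max + 1/(K·q_max), so a straight-line fit of C/q against C recovers q_max and K. The data points below are synthetic illustrations (q_max taken as the reported 3.56 mg/g for Cr(VI); K is an assumed value), not the study's measurements.

```python
# Linearized Langmuir fit: slope = 1/q_max, intercept = 1/(K*q_max).
import numpy as np

q_max_true, K_true = 3.56, 0.05               # mg/g, L/mg (K assumed)
C = np.array([20.0, 40.0, 60.0, 80.0, 100.0])  # equilibrium conc., mg/L
q = q_max_true * K_true * C / (1.0 + K_true * C)  # equilibrium uptake, mg/g

rng = np.random.default_rng(1)
q_obs = q * (1.0 + rng.normal(scale=0.02, size=C.size))  # 2% noise

slope, intercept = np.polyfit(C, C / q_obs, 1)
q_max_fit = 1.0 / slope                        # mg/g
K_fit = slope / intercept                      # L/mg
print(f"fitted q_max ~ {q_max_fit:.2f} mg/g, K_L ~ {K_fit:.3f} L/mg")
```

A close match between the fitted and measured maximum capacity, together with a high R² on the linearized plot, is the usual basis for the monolayer-adsorption conclusion.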
Many patients with neurological disorders such as ataxia do not have easy access to neurologists, especially those living in remote localities and in developing or underdeveloped countries. Ataxia is a degenerative disease of the nervous system that surfaces as difficulty with motor control, such as walking imbalance. Previous studies have attempted automatic diagnosis of ataxia with the help of wearable biomarkers, Kinect, and other sensors. These sensors, while accurate, do not scale well to naturalistic deployment settings. In this study, we propose a method for identifying ataxic symptoms by analyzing videos of participants walking down a hallway, captured with a standard monocular camera. In collaboration with 11 medical sites located in 8 different states across the United States, we collected a dataset of 155 videos, along with their severity ratings, from 89 participants (24 controls and 65 diagnosed with or pre-manifest for spinocerebellar ataxia). The participants performed the gait task of the Scale for the Assessment and Rating of Ataxia (SARA). We developed a computer vision pipeline to detect, track, and separate the participants from their surroundings and constructed several features from their body-pose coordinates to capture gait characteristics such as step width, step length, swing, stability, and speed. Our system is able to identify and track a patient in complex scenarios, for example, when multiple people are present in the video or a passerby interrupts the recording. Our ataxia risk-prediction model achieves 83.06% accuracy and an 80.23% F1 score. Similarly, our ataxia severity-assessment model achieves a mean absolute error (MAE) of 0.6225 and a Pearson correlation coefficient of 0.7268. Our model performed competitively when evaluated on data from medical sites not used during training.
Through feature importance analysis, we found that our models associate wider steps, decreased walking speed, and increased instability with greater ataxia severity, which is consistent with previously established clinical knowledge. Furthermore, we are releasing the models and the body-pose coordinate dataset to the research community; to our knowledge, this is the largest dataset on ataxic gait. Our models could contribute to improving health access by enabling remote ataxia assessment in nonclinical settings without requiring any sensors or special cameras. Our dataset will help the computer science community analyze different characteristics of ataxia and develop better algorithms for diagnosing other movement disorders.
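Turning body-pose coordinates into gait features like those described above (step width, speed) can be sketched as follows. The ankle trajectories here are synthetic; in a real pipeline they would come from a pose estimator run frame by frame on the walking video, and the feature definitions are illustrative assumptions, not the study's exact formulas.

```python
# Sketch: per-frame ankle keypoints -> simple gait features.
import numpy as np

fps = 30
t = np.arange(0, 5, 1 / fps)                   # 5 s of walking
# Synthetic ankle trajectories (x = lateral, y = forward), in metres:
# steady forward progress plus a small lateral sway.
left_ankle = np.stack([0.10 + 0.02 * np.sin(2 * np.pi * t), 1.2 * t], axis=1)
right_ankle = np.stack([-0.10 - 0.02 * np.sin(2 * np.pi * t), 1.2 * t], axis=1)

# Step width: mean lateral separation between the ankles.
step_width = np.mean(np.abs(left_ankle[:, 0] - right_ankle[:, 0]))

# Walking speed: net displacement of the ankle midpoint (a crude
# body-centre proxy) divided by elapsed time.
centre = (left_ankle + right_ankle) / 2
speed = np.linalg.norm(centre[-1] - centre[0]) / t[-1]

print(f"step width ~ {step_width:.3f} m, speed ~ {speed:.2f} m/s")
```

Stability and swing features follow the same pattern: each is a scalar summary of the keypoint time series, so the whole video reduces to one feature row per walk, ready for the risk-prediction and severity-regression models.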