Abstract: Assessing the severity level of dysarthria can provide an insight into the patient's improvement, assist pathologists to plan therapy, and aid automatic dysarthric speech recognition systems. In this article, we present a comparative study on the classification of dysarthria severity levels using different deep learning techniques and acoustic features. First, we evaluate the basic architectural choices such as deep neural network (DNN), convolutional neural network, gated recurrent units and long short-term memory (LSTM) networks…
“…Additionally, the development of evaluation methods that are user-friendly for non-professionals would be necessary. Third, it is important to acknowledge the potential role of artificial intelligence (AI) in neurological assessments [18, 19]. With advancements in machine learning techniques, AI has the ability to detect subtle changes in speech characteristics that may indicate neurological conditions like dysarthria.…”
This report proposes a new approach to assessing dysarthria in patients with brainstem infarction by involving individuals familiar with the patient. Such collaboration provides insights that traditional, subjective methods may miss. A man in his 70s presented with resolved positional vertigo. Standard neurological tests showed no abnormalities, and inquiries with the patient’s friend did not reveal voice changes. While inquiring about voice changes with family, friends, and acquaintances is a common practice in clinical settings, our approach involved the patient calling out to his friend from a distance. Although the physician detected no abnormalities, the friend noticed a lower voice. Subsequent magnetic resonance imaging (MRI) confirmed brainstem infarction. Early and subtle symptoms of brainstem infarction pose a detection challenge and can lead to serious outcomes if overlooked. This report provides the first evidence that distance calling can detect subtle voice changes associated with brainstem infarction that may be overlooked by conventional neurological examinations, including inquiries with individuals familiar with the patient’s voice. Brainstem infarction is often missed in the emergency department, and performing MRI on every patient is not feasible. This simple method may identify patients overlooked by conventional screening who should undergo neuroimaging such as MRI. Further research is needed, and involving non-professionals in assessments could significantly advance the diagnostic process.
“…To alleviate the above-mentioned limitations, a number of supportive systems to assess dysarthria via patient's speech or vocal performance analysis have been proposed in the literature. Examples include [12,25,26], which investigate the feasibility of machine learning (ML) methods for the analysis of audio data collected in hospital and home scenarios. In [27,28], a telemonitoring-based application is introduced to automatically assess the evolution in the intelligibility of the speech of dysarthric patients.…”
Section: Dysarthria Assessment
“…To search for new quantitative outcome measures to assess dysarthria progress, different approaches have been proposed. These mainly monitor the speech and vocal features of dysarthric subjects, in both home and hospital scenarios [7,10–14]. However, as stated in [15,16], the assessment of orofacial motor functions related to speech (or motor speech assessment) should also be considered to: (i) detect subtle improvements or worsening in patients' conditions (especially for those who suffer from ALS, spinal muscular atrophy (SMA), facial palsy and stroke); (ii) evaluate pharmacological and non-pharmacological treatment progress and (iii) improve the staging of rehabilitative strategies and pursue an augmentative and alternative communication (AAC) assessment [1].…”
“…These manifestations include diminished vocal volume, imprecise articulation, disturbances in coordinating respiratory and phonatory subsystems, and the presence of irregular speech pauses. The amalgamation of these defining attributes underscores the multifaceted nature of this speech disorder (Joshy and Rajan, 2022 ).…”
Neurological disorders include various conditions affecting the brain, spinal cord, and nervous system, which result in reduced performance of different organs and muscles throughout the human body. Dysarthria is a neurological disorder that significantly impairs an individual's ability to communicate effectively through speech. Individuals with dysarthria are characterized by muscle weakness that results in slow, slurred, and less intelligible speech production. Efficient identification of speech disorders at an early stage helps doctors suggest proper medications. The classification of dysarthric speech assumes a pivotal role as a diagnostic tool, enabling accurate differentiation between healthy speech patterns and those affected by dysarthria. Achieving a clear distinction between dysarthric speech and the speech of healthy individuals is made possible through the application of advanced machine learning techniques. In this work, we conducted feature extraction by utilizing the amplitude- and frequency-modulated (AFM) signal model, resulting in the generation of a comprehensive array of unique features. A method involving Fourier-Bessel series expansion is employed to separate a complex speech signal into distinct components. Subsequently, the Discrete Energy Separation Algorithm (DESA) is utilized to extract essential parameters, namely the amplitude envelope and instantaneous frequency, from each component of the speech signal. To ensure the robustness and applicability of our findings, we harnessed data from various sources, including the TORGO, UA Speech, and Parkinson datasets. The performance of several classifiers (KNN, SVM, LDA, NB, and boosted trees) was evaluated using multiple measures such as area under the curve, F1-score, sensitivity, and accuracy. Our analyses resulted in classification accuracies ranging from 85% to 97.8% and F1-scores between 0.90 and 0.97.
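The energy-separation step described above can be sketched in code. Below is a minimal NumPy implementation of the DESA-2 variant of the discrete energy separation algorithm, applied to a single band-limited component (in the paper's pipeline, one component produced by the Fourier-Bessel decomposition). The function names `teager` and `desa2` and the test signal are illustrative, not taken from the paper.

```python
import numpy as np

def teager(x):
    # Teager-Kaiser energy operator: Psi[x][n] = x[n]^2 - x[n-1] * x[n+1]
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def desa2(x):
    """DESA-2: estimate instantaneous amplitude envelope and
    instantaneous frequency (rad/sample) of a mono-component signal."""
    y = x[2:] - x[:-2]            # symmetric first difference
    psi_x = teager(x)[1:-1]       # trimmed to align with psi_y
    psi_y = teager(y)
    # Instantaneous frequency from the ratio of operator outputs
    omega = 0.5 * np.arccos(np.clip(1.0 - psi_y / (2.0 * psi_x), -1.0, 1.0))
    # Amplitude envelope
    amp = 2.0 * psi_x / np.sqrt(psi_y)
    return amp, omega
```

For a pure sinusoid `A*cos(omega*n + phi)`, both Teager outputs are constant and DESA-2 recovers `A` and `omega` exactly; on real speech the estimates are only meaningful after the signal has been separated into narrowband components, which is why the decomposition step precedes it.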