We describe the Saarland University submission to the shared task on Cross-Framework Meaning Representation Parsing (MRP) at the 2019 Conference on Computational Natural Language Learning (CoNLL).
Effective management of dementia hinges on timely detection and a precise diagnosis of the underlying cause of the syndrome at an early mild cognitive impairment (MCI) stage. Verbal fluency (VF) tasks are among the most frequently applied tests for early dementia detection due to their efficiency and ease of use. In these tasks, participants are asked to produce as many words as possible belonging to either a semantic category (SVF task) or a phonemic category (PVF task). Even though the SVF and PVF share neurocognitive function profiles, the PVF is typically believed to be less sensitive to MCI-related cognitive impairment, and recent research on fine-grained automatic evaluation of VF tasks has mainly focused on the SVF. Contrary to this belief, we show that applying state-of-the-art semantic and phonemic distance metrics in the automatic analysis of PVF word productions allows in-depth conclusions about the production strategy of MCI patients. Our results reveal a dissociation between semantically- and phonemically-guided search processes in the PVF. Specifically, we show that subjects with MCI rely less on semantic and more on phonemic processes to guide their word production compared to healthy controls (HC). We further show that semantic similarity-based features improve automatic MCI versus HC classification by 29% over previous approaches for the PVF. As such, these results point towards the yet underexplored utility of the PVF for in-depth assessment of cognition in MCI.
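The abstract does not spell out its distance metrics, so as a minimal sketch: consecutive word productions in a fluency task can be scored with a cosine similarity over word vectors (semantic) and an edit distance over word forms as a crude proxy for phonemic distance. The toy vectors and the Levenshtein proxy below are assumptions for illustration, not the study's actual pipeline.

```python
import math

def cosine_similarity(u, v):
    """Semantic proximity of two word vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def levenshtein(a, b):
    """Edit distance between word forms, a rough stand-in for phonemic distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Toy 2-d "embeddings" purely for illustration; a real analysis would use
# trained word vectors and phoneme-level transcriptions.
vectors = {"flour": [0.9, 0.1], "floor": [0.1, 0.9], "flower": [0.85, 0.2]}
print(cosine_similarity(vectors["flour"], vectors["flower"]))  # semantically close
print(levenshtein("flour", "floor"))                           # phonemically close
```

Averaging such pairwise scores over a participant's production sequence yields the kind of semantic- versus phonemic-strategy features the abstract alludes to.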
Background: Automated speech analysis has gained increasing attention as a way to help diagnose depression. Most previous studies, however, focused on comparing speech in patients with major depressive disorder to that in healthy volunteers. An alternative may be to associate speech with depressive symptoms in a non-clinical sample, as this may help to find early and sensitive markers in those at risk of depression. Methods: We included n = 118 healthy young adults (mean age: 23.5 ± 3.7 years; 77% women) and asked them to talk about a positive and a negative event in their life. We then assessed the level of depressive symptoms with a self-report questionnaire, with scores ranging from 0 to 60. We transcribed the speech data and extracted acoustic as well as linguistic features. We then tested whether individuals below or above the cut-off for clinically relevant depressive symptoms differed in speech features. Next, we predicted whether someone would be below or above that cut-off, as well as the individual scores on the depression questionnaire. Since depression is associated with cognitive slowing and attentional deficits, we finally correlated depression scores with performance on the Trail Making Test. Results: In our sample, n = 93 individuals scored below and n = 25 scored above the cut-off for clinically relevant depressive symptoms. Most speech features did not differ significantly between the two groups, but individuals above the cut-off spoke more than those below it in both the positive and the negative story. In addition, higher depression scores in that group were associated with slower completion times on the Trail Making Test. We were able to predict with 93% accuracy who would be below or above the cut-off. We could also predict the individual depression scores with a low mean absolute error (3.90), with the best performance achieved by a support vector machine.
Conclusions: Our results indicate that even in a sample without a clinical diagnosis of depression, changes in speech relate to higher depression scores. This should be investigated in more detail; a longitudinal study could test whether the speech features found here represent early and sensitive markers of subsequent depression in individuals at risk.
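As a hedged illustration of the score-prediction setup this abstract describes (a support vector machine predicting questionnaire scores from speech features, evaluated by mean absolute error), the sketch below uses entirely synthetic features and labels; the feature set, kernel, and hyperparameters are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(118, 5))      # 118 speakers, 5 synthetic speech features
# Synthetic questionnaire scores on a 0-60 scale, loosely tied to one feature.
y = np.clip(10 + 4 * X[:, 0] + rng.normal(scale=2, size=118), 0, 60)

# Support vector regression, evaluated with cross-validated predictions.
pred = cross_val_predict(SVR(kernel="rbf", C=10.0), X, y, cv=5)
mae = mean_absolute_error(y, pred)
print(f"MAE: {mae:.2f}")
```

The classification variant (below/above cut-off) would swap `SVR` for `SVC` and score accuracy instead of MAE.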
Objective: Semantic verbal fluency (SVF) tasks require individuals to name items from a specified category within a fixed time. Impaired SVF performance is well documented in patients with amnestic Mild Cognitive Impairment (aMCI). The two leading theoretical views attribute it either to a loss of semantic knowledge or to impaired executive control. Method: We assessed SVF three times on two consecutive days in 29 healthy controls (HC) and 29 patients with aMCI to determine which of the two views holds true. Results: When doing the task for the first time, patients with aMCI produced fewer and more common words with a shorter mean response latency. When tested repeatedly, only the healthy controls improved their performance. Likewise, only the performance of the HC indicated two distinct retrieval processes: a prompt retrieval of readily available items at the beginning of the task and an active search through semantic space towards the end. With repeated assessment, the pool of readily available items grew in the HC, but not in patients with aMCI. Conclusion: The production of fewer and more common words in aMCI points to a smaller search set and supports the loss-of-semantic-knowledge view. The failure to improve performance, as well as the lack of distinct retrieval processes, points to an additional impairment in executive control. Our data did not clearly favour one theoretical view over the other, but rather indicate that the impairment of patients with aMCI in SVF is due to a combination of both.
Background: Even if classic neuropsychological tests often have excellent psychometric properties for detecting Mild Cognitive Impairment (MCI), they are not suitable for cost-effective, low-burden screening at scale. Speech-based digital biomarkers can be deployed in a highly automated fashion. We present the results of an MCI screening algorithm based on a digital Speech Biomarker for Cognition (SB-C) in the Swedish H70 birth cohort study. Method: We used a sample from the Swedish H70 Birth Cohort study (N = 404; 356 cognitively healthy (HC), 48 MCI). We automatically extracted the SB-C score and its subscores (executive function, memory, semantic memory, processing speed) from SVF and RAVLT speech recordings using ki:elements' proprietary speech analysis pipeline, including automatic speech recognition and feature extraction. We (1) performed inferential statistics comparing the MCI and HC groups based on the biomarker scores and (2) built a machine learning model to screen for MCI. For (1), we performed a non-parametric Kruskal-Wallis test to compare the SB-C scores of the HC and MCI groups to check for general feasibility. For (2), we trained a support vector machine with class weights and leave-one-out cross-validation to classify between MCI and HC using the SB-C scores (overall score and subscores) as input. Result: There was a group difference in the SB-C aggregated cognition score between the groups (HC > MCI; χ²(1) = 45.9, p < 0.001; Figure 1), and also in the subscores (Table 2). For classifying between MCI and HC, using a feature selection method, the best model used all five biomarker scores, with an area under the curve of 0.77 (Figure 2), a specificity of 0.77, and a sensitivity of 0.76 (Table 3). Conclusion: We found that a machine learning-based screening algorithm based on the SB-C can detect probable MCI patients in a representative population sample of older people using a speech biomarker read-out.
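The screening classifier described above (an SVM with class weights, evaluated by leave-one-out cross-validation on five biomarker scores) can be sketched as follows. The data here are simulated with a smaller, similarly imbalanced cohort; the real SB-C scores, kernel choice, and any feature selection step are not reproduced.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(42)
n_hc, n_mci = 90, 12                       # imbalanced groups, as in the cohort
X = np.vstack([rng.normal(0.0, 1.0, size=(n_hc, 5)),     # 5 biomarker scores, HC
               rng.normal(-1.0, 1.0, size=(n_mci, 5))])  # shifted scores, MCI
y = np.array([0] * n_hc + [1] * n_mci)     # 0 = HC, 1 = MCI

# class_weight="balanced" counteracts the HC/MCI imbalance; LeaveOneOut
# gives one held-out prediction per participant.
clf = SVC(kernel="linear", class_weight="balanced")
pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())

sensitivity = (pred[y == 1] == 1).mean()   # MCI correctly flagged
specificity = (pred[y == 0] == 0).mean()   # HC correctly passed
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```

Reporting sensitivity and specificity separately, as the abstract does, matters more than raw accuracy here, since always predicting HC would already be ~88% accurate on this split.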