Primary progressive aphasia is a clinical syndrome defined by progressive deficits isolated to speech and/or language, and can be classified into non-fluent, semantic and logopenic variants based on motor speech, linguistic and cognitive features. The connected speech of patients with primary progressive aphasia has often been dichotomized simply as 'fluent' or 'non-fluent'; however, fluency is a multidimensional construct that encompasses features such as speech rate, phrase length, articulatory agility and syntactic structure, which are not always impacted in parallel. In this study, our first objective was to improve the characterization of connected speech production in each variant of primary progressive aphasia by quantifying speech output along a number of motor speech and linguistic dimensions simultaneously. Second, we aimed to determine the neuroanatomical correlates of changes along these different dimensions. We recorded, transcribed and analysed speech samples from 50 patients with primary progressive aphasia, along with neurodegenerative and normal control groups. Patients were scanned with magnetic resonance imaging, and voxel-based morphometry was used to identify regions where atrophy correlated significantly with motor speech and linguistic features. Speech samples in patients with the non-fluent variant were characterized by slow rate, distortions, syntactic errors and reduced complexity. In contrast, patients with the semantic variant exhibited normal rate and very few speech or syntactic errors, but showed increased proportions of closed-class words, pronouns and verbs, and higher-frequency nouns, reflecting lexical retrieval deficits. In patients with the logopenic variant, speech rate (a common proxy for fluency) was intermediate between the other two variants, but distortions and syntactic errors were less common than in the non-fluent variant, while lexical access was less impaired than in the semantic variant.
Reduced speech rate was linked with atrophy in a wide range of both anterior and posterior language regions, but specific deficits had more circumscribed anatomical correlates. Frontal regions were associated with motor speech and syntactic processes, anterior and inferior temporal regions with lexical retrieval, and posterior temporal regions with phonological errors and several other types of disruptions to fluency. These findings demonstrate that a multidimensional quantification of connected speech production is necessary to adequately characterize the differences between the speech patterns of each variant of primary progressive aphasia, and to reveal associations between particular aspects of connected speech and specific components of the neural network for speech production.
Neural network models of attention can provide a unifying approach to the study of human cognitive and emotional development (Posner & Rothbart, 2007). In this paper we argue that a neural networks approach to the infant development of joint attention can inform our understanding of the nature of human social learning, symbolic thought processes and social cognition. At its most basic, joint attention involves the capacity to coordinate one's own visual attention with that of another person. We propose that joint attention development involves increments in the capacity to engage in simultaneous or parallel processing of information about one's own attention and the attention of other people. Infant practice with joint attention is both a consequence and an organizer of the development of a distributed and integrated brain network involving frontal and parietal cortical systems. This executive distributed network first serves to regulate the capacity of infants to respond to and direct the overt behavior of other people in order to share experience with others through the social coordination of visual attention. In this paper we describe this parallel and distributed neural network model of joint attention development and discuss two hypotheses that stem from this model. One is that activation of this distributed network during coordinated attention enhances the depth of information processing and encoding beginning in the first year of life. We also propose that with development joint attention becomes internalized as the capacity to socially coordinate mental attention to internal representations. As this occurs, the executive joint attention network makes vital contributions to the development of human symbolic thinking and social cognition.
This pilot study evaluates the ability of machine learning algorithms to assist with the differential diagnosis of dementia subtypes based on brief (< 10 min) spontaneous speech samples. We analyzed one recording of a brief spontaneous speech sample from each of 48 participants drawn from 5 groups: 4 types of dementia plus healthy controls. Recordings were analyzed using a speech recognition system optimized for speaker-independent spontaneous speech. Lexical and acoustic features were automatically extracted. The resulting feature profiles were used as input to a machine learning system that was trained to identify the diagnosis assigned to each research participant. Between-group differences in lexical and acoustic features were detected in accordance with expectations from the prior research literature, suggesting that classifications were based on features consistent with human-observed symptomatology. Machine learning algorithms were able to identify participants' diagnostic group with accuracy comparable to existing diagnostic methods in use today. Results suggest this clinical speech analytic approach offers promise as an additional, objective and easily obtained source of diagnostic information for clinicians.
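The pipeline described in this abstract (extract lexical and acoustic feature profiles per participant, then train a classifier to predict diagnostic group) can be sketched as follows. This is a minimal illustration assuming scikit-learn; the feature columns, sample values, and choice of random-forest classifier are assumptions for illustration, not the study's actual features, data, or model.

```python
# Hypothetical sketch of a speech-feature classification pipeline.
# X's columns stand in for automatically extracted lexical/acoustic
# features (e.g. type-token ratio, pause rate, mean pitch) -- these
# are illustrative placeholders, not the study's feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy feature matrix: one row per participant, six feature columns.
X = rng.normal(size=(48, 6))
# Toy labels: 0 = healthy control, 1-4 = four dementia subtypes.
y = rng.integers(0, 5, size=48)

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# Cross-validation is a common way to estimate classification accuracy
# on small clinical samples like this one.
scores = cross_val_score(clf, X, y, cv=5)
mean_accuracy = scores.mean()
```

With real feature profiles in place of the random matrix, `mean_accuracy` would estimate how well the extracted features separate the diagnostic groups.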
Theory suggests that information processing during joint attention may be atypical in children with Autism Spectrum Disorder (ASD). This hypothesis was tested in a study of school-aged children with higher functioning ASD and groups of children with symptoms of ADHD or typical development. The results indicated that the control groups displayed significantly better recognition memory for pictures studied in an initiating joint attention (IJA) rather than a responding to joint attention (RJA) condition. This effect was not evident in the ASD group. The ASD group also recognized fewer pictures from the IJA condition than controls, but not from the RJA condition. Atypical information processing may be a marker of the continued effects of joint attention disturbance in school-aged children with ASD.
Time-based prospective memory, the ability to carry out a future intention at a specified time, was found to be impaired in a community sample of clinically depressed adults, relative to a nondepressed sample. Nondepressed participants monitored the time more frequently and, in the final block of the task, accelerated time-monitoring as the target time for the prospective memory response approached. These results are consistent with previous findings of depression-related impairments in retrospective memory tasks that require controlled, self-initiated processing.
Background Impairments in social attention play a major role in autism, but little is known about their role in development after preschool. In this study a public speaking task was used to study social attention, its moderators, and its association with classroom learning in elementary and secondary students with higher functioning Autism Spectrum Disorder (HFASD). Method Thirty-seven students with HFASD and 54 age- and IQ-matched peers without symptoms of ASD were assessed in a virtual classroom public speaking paradigm. This paradigm assessed the ability to attend to 9 avatar peers seated at a table while simultaneously answering self-referenced questions. Results Students with HFASD looked less frequently to avatar peers in the classroom while talking. However, social attention was moderated in the HFASD sample such that students with lower IQ, and/or more symptoms of social anxiety, and/or more ADHD Inattentive symptoms, displayed more atypical social attention. Group differences were more pronounced when the classroom contained social avatars versus non-social targets. Moreover, measures of social attention rather than non-social attention were significantly associated with parent report and objective measures of learning in the classroom. Conclusions The data in this study support the hypothesis of the Social Attention Model of ASD that social attention disturbance remains part of the school-aged phenotype of autism and is related to syndrome-specific problems in social learning. More research of this kind would likely contribute to advances in the understanding of the development of autism and educational intervention approaches for affected school-aged children.
A new virtual reality task was employed which uses preference for interpersonal distance to social stimuli to examine social motivation and emotion perception in children with Autism Spectrum Disorders. Nineteen children with higher functioning Autism Spectrum Disorder (HFASD) and 23 age-, gender-, and IQ-matched children with typical development (TD) used a joystick to position themselves closer to or further from virtual avatars while attempting to identify six emotions expressed by the avatars (happiness, fear, anger, disgust, sadness, and surprise) that were expressed at different levels of intensity. The results indicated that children with HFASD displayed significantly less approach behavior to the positive happy expression than did children with TD, who displayed increases in approach behavior to higher intensities of happy expressions. Alternatively, all groups tended to withdraw from negative emotions to the same extent, and there were no diagnostic group differences in accuracy of recognition of any of the six emotions. This pattern of results is consistent with theory suggesting that some children with HFASD display atypical social-approach motivation, or reduced sensitivity to the positive reward value of positive social-emotional events. Conversely, there was little evidence that a tendency to withdraw from social-emotional stimuli, or a failure to process social-emotional stimuli, was a component of social behavior task performance in this sample of children with HFASD.
We describe results that show the effectiveness of machine learning in the automatic diagnosis of certain neurodegenerative diseases, several of which alter speech and language production. We analyzed audio from 9 control subjects and 30 patients diagnosed with one of three subtypes of Frontotemporal Lobar Degeneration. From these data, we extracted features of the audio signal and of the words the patient used, which were obtained using our automated transcription technologies. We then automatically learned models that predict the diagnosis of the patient from these features. Our results show that the learned models predict diagnosis with accuracy significantly better than chance. Future studies using higher quality recordings will likely improve these results.