Background: Difficulties with facial expression processing may be associated with the characteristic social impairments in individuals with autism spectrum disorder (ASD). Emotional face processing in ASD has been investigated in an abundance of behavioral and EEG studies, yielding, however, mixed and inconsistent results. Methods: We combined fast periodic visual stimulation (FPVS) with EEG to assess the neural sensitivity to implicitly detect briefly presented facial expressions among a stream of neutral faces, in 23 boys with ASD and 23 matched typically developing (TD) boys. Neutral faces with different identities were presented at 6 Hz, periodically interleaved with an expressive face (angry, fearful, happy, or sad, in separate sequences) as every fifth image (i.e., a 1.2 Hz oddball frequency). These distinguishable frequency tags for neutral and expressive stimuli allowed direct and objective quantification of the expression-categorization responses, requiring only four sequences of 60 s of recording per condition. Results: Both groups show equal neural synchronization to the general face stimulation and similar neural responses to happy and sad faces. However, the ASD group displays significantly reduced responses to angry and fearful faces, compared to TD boys. At the individual subject level, these neural responses allow prediction of ASD group membership with an accuracy of 87%. Whereas TD participants show a significantly lower sensitivity to sad faces than to the other expressions, ASD participants show an equally low sensitivity to all the expressions. Conclusions: Our results indicate an emotion-specific processing deficit, instead of a general emotion-processing problem: Boys with ASD are less sensitive than TD boys to rapidly and implicitly detect angry and fearful faces. The implicit, fast, and straightforward nature of FPVS-EEG opens new perspectives for clinical diagnosis.
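The frequency-tagging logic of FPVS-EEG can be illustrated with a short simulation. The sketch below is not the authors' analysis pipeline; it uses simulated data with assumed parameters (a 512 Hz sampling rate, arbitrary response amplitudes and noise level) to show why a 60 s sequence lets the 6 Hz base response and the 1.2 Hz oddball response be read off directly from exact frequency bins of the spectrum.

```python
import numpy as np

fs = 512             # sampling rate in Hz (assumed for illustration)
dur = 60             # one FPVS sequence of 60 s, as in the study
t = np.arange(0, dur, 1 / fs)

# Simulated EEG: a 6 Hz general visual response, a smaller 1.2 Hz
# oddball (expression-categorization) response, plus broadband noise.
rng = np.random.default_rng(1)
eeg = (1.0 * np.sin(2 * np.pi * 6.0 * t)
       + 0.3 * np.sin(2 * np.pi * 1.2 * t)
       + rng.normal(scale=1.0, size=t.size))

# With 60 s of data the spectral resolution is 1/60 Hz, so both tag
# frequencies (1.2 Hz and 6 Hz) fall on exact frequency bins.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def bin_amp(f):
    """Amplitude at the spectral bin closest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

base_amp = bin_amp(6.0)      # general face-stimulation response
oddball_amp = bin_amp(1.2)   # expression-categorization response
noise_amp = bin_amp(1.35)    # a neighboring noise bin, for comparison
```

Because each response is confined to a single known bin, both tagged responses stand out against adjacent noise bins without any trial averaging, which is what makes the quantification objective and fast.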
Objectives: Palatal shape carries a wealth of clinically relevant information. Moreover, palatal shape analysis can be used to guide or evaluate orthodontic treatments. A statistical shape model (SSM) is a tool that, by means of dimensionality reduction, aims at compactly modeling the variance of complex shapes for efficient analysis. In this report, we evaluate several competing approaches to constructing SSMs for the human palate. Setting and Sample Population: This study used a sample comprising digitized 3D maxillary dental casts from 1,324 individuals. Materials and methods: Principal component analysis (PCA) and autoencoders (AEs) are popular approaches to constructing SSMs. PCA is a dimension-reduction technique that provides a compact description of shapes by uncorrelated variables. AEs are situated in the field of deep learning and provide a non-linear framework for dimension reduction. This work introduces the singular autoencoder (SAE), a hybrid approach that combines the most important properties of PCA and AEs. We assess the performance of the SAE using standard evaluation tools for SSMs, including accuracy, generalization, and specificity.
Difficulties in automatic emotion processing in individuals with autism spectrum disorder (ASD) might remain concealed in behavioral studies due to compensatory strategies. To gain more insight into the mechanisms underlying facial emotion recognition, we recorded eye-tracking and facial mimicry data of 20 school-aged boys with ASD and 20 matched typically developing controls while they performed an explicit emotion recognition task. Proportional looking times to specific face regions (eyes, nose, and mouth) and face exploration dynamics were analyzed. In addition, facial mimicry was assessed. Boys with ASD and controls were equally capable of recognizing expressions and did not differ in proportional looking times or in the number and duration of fixations. Yet, specific facial expressions elicited particular gaze patterns, especially within the control group. Both groups showed similar face scanning dynamics, although boys with ASD demonstrated smaller saccadic amplitudes. Regarding facial mimicry, we found no emotion-specific facial responses and no group differences in the responses to the displayed facial expressions. Our results indicate that boys with and without ASD employ similar eye gaze strategies to recognize facial expressions. Smaller saccadic amplitudes in boys with ASD might indicate a less exploratory face processing strategy. Yet, this slightly more persistent visual scanning behavior in boys with ASD does not imply less efficient processing of emotion information, given the similar behavioral performance. Results on the facial mimicry data indicate similar facial responses to emotional faces in boys with and without ASD. Lay Summary: We investigated (i) whether boys with and without autism apply different face exploration strategies when recognizing facial expressions and (ii) whether they mimic the displayed facial expression to a similar extent.
We found that boys with and without ASD recognize facial expressions equally well, and that both groups show similar facial reactions to the displayed facial emotions. Yet, boys with ASD visually explored the faces slightly less than the boys without ASD.