Clinical research in autism has recently witnessed promising digital phenotyping results, mainly focused on the extraction of single features such as gaze, head turning in response to name-calling, or visual tracking of moving objects. The main drawback of these studies is their focus on relatively isolated behaviors elicited by largely controlled prompts. We recognize that while the diagnostic process relies on indexing such specific behaviors, ASD also entails broad impairments that often transcend single behavioral acts. For instance, atypical nonverbal behavior manifests through global patterns of atypical postures and movements, and fewer gestures, often decoupled from eye contact, facial affect, and speech. Here, we tested the hypothesis that a deep neural network trained on the non-verbal aspects of social interaction can effectively differentiate between children with ASD and their typically developing peers. Our model achieves an accuracy of 80.9% (F1 score: 0.818; precision: 0.784; recall: 0.854), with the prediction probability positively correlated with the overall level of autism symptoms in the social affect and restricted and repetitive behaviors domains. Given the non-invasive and affordable nature of computer vision, our approach holds reasonable promise that reliable machine-learning-based ASD screening may become a reality in the near future.
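The reported metrics can be checked for internal consistency, since F1 is the harmonic mean of precision and recall. A minimal sketch (not from the paper's code) verifying the figures above:

```python
# Sanity check: F1 should equal the harmonic mean of the reported
# precision (0.784) and recall (0.854).
precision, recall = 0.784, 0.854
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # → 0.818, matching the reported F1 score
```

The computed value (≈0.8175) rounds to the reported 0.818, so the three metrics are mutually consistent.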
Public health intervention techniques have been highly significant in reducing the negative impact of several epidemics and pandemics. Among recent widespread diseases, one of the most dangerous has been severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the cause of coronavirus disease 2019 (COVID-19). The impact of the virus has been observed in over 200 countries, leading to hospitalizations and deaths of millions of people. Existing COVID-19 risk estimation tools available to the general public have been highly variable during the pandemic because they depend on rapidly evolving factors such as community transmission levels and variants. There has also been confusion surrounding certain personal protective strategies, such as risk reduction by mask-wearing and vaccination. To create a simplified, easy-to-use tool for estimating the individual risks associated with carrying out daily-life activities, we developed the COVID-19 Activity Risk Calculator (CovARC). CovARC serves as a gamified public health intervention: users can "play with" how different COVID-19-related risks change depending on several factors when carrying out a routine daily activity. Empowering the public to make informed, data-driven decisions about safely engaging in activities may help to reduce COVID-19 levels in the community. In this study, we demonstrate a streamlined, scalable, and accurate COVID-19 risk calculation system. Our study also quantitatively showcases the increased impact of interventions such as vaccination and mask-wearing when case counts are higher, which could help inform and support policy decisions around mask-mandate case thresholds and other non-pharmaceutical interventions.
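The abstract does not give CovARC's actual formula. As an illustration only, a minimal multiplicative risk sketch (all function names, factors, and numbers below are hypothetical, not the study's model) shows why an intervention such as masking removes far more absolute risk when community prevalence is high:

```python
# Hypothetical sketch, NOT CovARC's actual model: per-activity infection
# risk approximated as prevalence x exposure factor x residual risk after
# protective measures (mask, vaccination) applied independently.
def activity_risk(prevalence, exposure_factor, mask_reduction=0.0, vax_reduction=0.0):
    residual = (1 - mask_reduction) * (1 - vax_reduction)
    return prevalence * exposure_factor * residual

# Same activity, same mask, at low vs. high community prevalence.
low = activity_risk(0.001, 0.5, mask_reduction=0.5)
high = activity_risk(0.02, 0.5, mask_reduction=0.5)
print(high / low)  # → 20.0
```

Under this toy model the mask halves relative risk in both settings, but the absolute risk it removes scales with prevalence, consistent with the abstract's point that interventions matter more when cases are higher.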
Background: Imitation behaviors develop very early and increase in frequency and complexity during childhood. Most studies of children with autism spectrum disorder (ASD) support a general decrement in imitation performance. To better understand this phenomenon in ASD, factors related to visual attention and motor execution have been proposed. However, previous studies used various paradigms and explored different types of imitation in heterogeneous samples, leading to inconsistent findings. The present study examines imitation performance in relation to visual attention and motor execution. We focused on gesture imitation, consistently reported as more affected in ASD than imitation of actions with objects. We also investigated the influence of meaningful and meaningless gestures on imitation performance.
Methods: Our imitation eye-tracking task used a video of an actor who demonstrated gestures and prompted children to imitate them. The demonstrations comprised three types of gestures: meaningful (MF) and meaningless (ML) hand gestures, and meaningless facial (FAC) gestures. We measured the total fixation duration on the actor's face during child-directed speech and gesture demonstrations. During the eye-tracking task, we video-recorded children to later assess their imitation performance. Our sample comprised 100 participants, of whom 84 were children with ASD (aged 3.55 ± 1.11 years).
Results: The ASD and typically developing (TD) groups globally displayed the same visual attention toward the face (during child-directed speech) and toward gesture demonstrations, although children with ASD spent less time fixating on the face during FAC stimuli. Visual exploration of the actor's face and gestures did not influence imitation performance. Rather, imitation performance was positively correlated with chronological and developmental age.
Moreover, imitation of MF gestures was associated with less severe autistic symptoms, whereas imitation of ML gestures was positively correlated with higher non-verbal cognitive skills and fine motor skills.
Conclusions: These findings contribute to a better understanding of the complexity of imitation. We delineate the distinct nature of imitation of MF and ML hand gestures in children with ASD, and discuss clinical implications for assessment and intervention programs.
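The study's key eye-tracking measure is total fixation duration on an area of interest (AOI) such as the actor's face. A minimal sketch of how such a measure is typically aggregated from labeled fixation events (the data, labels, and function below are illustrative assumptions, not the study's pipeline):

```python
# Hypothetical illustration (not the study's code): sum fixation durations
# that fall on a given area of interest, e.g. the actor's face.
fixations = [  # (aoi_label, duration_ms) with made-up sample values
    ("face", 230), ("hand", 120), ("face", 410), ("background", 90), ("face", 150),
]

def total_fixation_ms(fixations, aoi):
    """Total fixation duration (ms) on one AOI across all fixation events."""
    return sum(duration for label, duration in fixations if label == aoi)

print(total_fixation_ms(fixations, "face"))  # → 790
```

Group comparisons like those in the Results section would then contrast such per-child totals (e.g. ASD vs. TD fixation time on the face during FAC stimuli).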