Can non-clinicians spot preschoolers likely to have autism spectrum disorder by observing their everyday peer interaction? We set out to develop a screening tool that capitalizes on peer interaction as a naturalistic “stress test” to identify children more likely than their peers to have autism spectrum disorder. A total of 304 3- to 4-year-olds were observed at school with an 84-item preliminary checklist; data-driven item reduction yielded a 13-item Classroom Observation Scale. Classroom Observation Scale scores correlated significantly with Autism Diagnostic Observation Schedule–2 scores. To validate the scale, another 322 2- to 4-year-olds were screened using the Classroom Observation Scale. The screen-positive children and randomly selected typically developing peers were assessed for autism spectrum disorder 1.5 years later. The Classroom Observation Scale, as used by teachers and researchers near preschool onset, predicted autism spectrum disorder diagnoses 1.5 years later (odds ratios = 14.6 and 6.7, respectively). This user-friendly 13-item Classroom Observation Scale enables teachers and healthcare workers with little or no clinical training to identify, with reliable and valid results, preschoolers more likely than their peers to have autism spectrum disorder.

Lay abstract

With professional training and regular opportunities to observe children interacting with their peers, preschool teachers are well positioned to notice children’s autism spectrum disorder symptomatology. Yet even when a preschool teacher suspects that a child may have autism spectrum disorder, fear of raising a false alarm may hold the teacher back from alerting the parents, let alone suggesting that they consider a clinical assessment for the child. A valid and convenient screening tool can help preschool teachers make a more informed, and hence more confident, judgment.
In psychological science, there is growing concern about the reproducibility of scientific findings. For instance, the Reproducibility Project: Psychology (Open Science Collaboration, 2015) found that the proportion of successful replications in psychology was 41%. This proportion was calculated using Cumming and Maillardet's (2006) widely employed capture procedure (CPro) and capture percentage (CPer). Despite the popularity of CPro and CPer, we believe that using them may lead to an incorrect conclusion of (a) successful replication when the population effect sizes in the original and replicated studies are different; and (b) unsuccessful replication when the population effect sizes in the original and replicated studies are identical but their sample sizes are different. Our simulation results show that under these conditions CPro and CPer are biased, so researchers can easily draw a wrong conclusion of successful/unsuccessful replication. Implications of these findings are considered in the conclusion.
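The sample-size sensitivity described above can be illustrated with a short simulation. This sketch assumes that the capture percentage is the share of replication sample means landing inside the original study's 95% confidence interval; the function name and parameters are illustrative, not taken from the paper. Both studies draw from the same population, so any shift in the capture rate here comes from the sample-size mismatch alone.

```python
import random
import statistics

def capture_percentage(n_orig, n_rep, mu=0.0, sigma=1.0, trials=2000, seed=42):
    """Estimate the capture percentage: the proportion of replication
    sample means that fall inside the original study's 95% CI.
    Both studies sample from the SAME population (mu, sigma)."""
    rng = random.Random(seed)
    captured = 0
    for _ in range(trials):
        orig = [rng.gauss(mu, sigma) for _ in range(n_orig)]
        rep = [rng.gauss(mu, sigma) for _ in range(n_rep)]
        m = statistics.mean(orig)
        se = statistics.stdev(orig) / n_orig ** 0.5
        lo, hi = m - 1.96 * se, m + 1.96 * se  # normal-approximation 95% CI
        if lo <= statistics.mean(rep) <= hi:
            captured += 1
    return captured / trials

print(capture_percentage(50, 50))   # near the ~83% expected for matched sizes
print(capture_percentage(50, 500))  # higher, despite an identical population effect
```

With matched sample sizes the rate sits near the roughly 83% average capture percentage Cumming and Maillardet report; enlarging only the replication sample pushes it toward 95% even though the population effect is unchanged, which is the kind of bias the abstract describes.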
A common research question in psychology entails examining whether significant group differences (e.g. male vs. female) can be found in a list of numeric variables that measure the same underlying construct (e.g. intelligence). Researchers often use a multivariate analysis of variance (MANOVA), which is based on conventional null-hypothesis significance testing (NHST). Recently, a number of quantitative researchers have suggested reporting an effect size measure (ES) in this research scenario because of the perceived shortcomings of NHST. Thus, a number of MANOVA ESs have been proposed (e.g. generalized eta squared, η²_G, and generalized omega squared, ω²_G), but they rely on two key assumptions (multivariate normality and homogeneity of covariance matrices) that are frequently violated in psychological research. To solve this problem, we propose a non-parametric (assumption-free) ES, Aw, for MANOVA. The new ES is developed on the basis of the non-parametric A in ANOVA. To test Aw, we conducted a Monte Carlo simulation. The results showed that Aw was accurate (robust) across the manipulated conditions (non-normal distributions, unequal covariance matrices between groups, total sample sizes, sample size ratios, true ES values, and numbers of dependent variables), thereby providing empirical evidence supporting the use of Aw, particularly when key assumptions are violated. Implications of the proposed Aw for psychological research and other disciplines are also discussed.
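The abstract does not spell out the formula for Aw, but the univariate non-parametric A it builds on is the familiar stochastic-superiority statistic: the probability that a score drawn from one group exceeds a score drawn from the other, counting ties as half. A minimal sketch of that base measure, with an illustrative function name, is:

```python
def nonparametric_a(x, y):
    """Non-parametric A statistic: the probability that a random draw
    from x exceeds a random draw from y, with ties counted as half.
    A = 0.5 indicates no effect; 0 and 1 are the extremes."""
    greater = sum(1 for xi in x for yi in y if xi > yi)
    ties = sum(1 for xi in x for yi in y if xi == yi)
    return (greater + 0.5 * ties) / (len(x) * len(y))

print(nonparametric_a([1, 2, 3], [1, 2, 3]))  # 0.5 (identical groups)
print(nonparametric_a([4, 5, 6], [1, 2, 3]))  # 1.0 (complete separation)
```

Because A is computed from rank-order comparisons rather than means and variances, it needs no normality or equal-covariance assumptions, which is what makes a multivariate extension such as the proposed Aw attractive when those assumptions fail.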