How prevalent is dyslexia? A definitive answer to this question has been elusive, both because reading performance and the predictors of dyslexia are continuously distributed and because samples of poor readers are heterogeneous. Such samples are a mixture of individuals whose poor reading is consistent with, or expected based on, their performance in other academic areas and in language, and individuals with dyslexia, whose poor reading is not expected based on their performance in those areas. In the present article, we replicate and extend a new approach for determining the prevalence of dyslexia. Using model-based meta-analysis and simulation, we found three main results. First, the prevalence of dyslexia is better represented as a distribution that varies as a function of severity than as any single-point estimate. Second, samples of poor readers will contain more expected poor readers than unexpected (dyslexic) readers. Third, individuals with dyslexia can be found across the reading spectrum, not only at the lower tail of reading performance. These results have implications for screening and identification, and for recruiting participants for scientific studies of dyslexia.
The accuracy of certain internal consistency estimators has been questioned in recent years. The present study tests the accuracy of five reliability estimators (Cronbach’s alpha, omega, omega hierarchical, Revelle’s omega, and the greatest lower bound) in 140 simulated conditions of unidimensional continuous data with uncorrelated errors, varying sample size, number of items, population reliability, and factor loadings. Estimators that have been proposed to replace alpha were compared with alpha as well as with each other. Reliability estimates were shown to be affected by sample size, degree of violation of tau equivalence, population reliability, and number of items in a scale. Under the conditions simulated here, alpha and omega yielded the most accurate reflections of population reliability values. A follow-up regression comparing alpha and omega revealed alpha to be more sensitive to the degree of violation of tau equivalence, whereas omega was more affected by sample size and number of items, especially when population reliability was low.
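To make concrete what these estimators quantify, here is a minimal sketch of Cronbach's alpha computed on simulated unidimensional data with uncorrelated errors. The loadings and sample size are illustrative assumptions, not the study's simulation grid; the equal loadings make the data tau equivalent, the case in which alpha matches population reliability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate unidimensional continuous data: one latent factor, uncorrelated
# errors (assumed values: n = 500 people, k = 8 items, equal loadings of .6,
# so the data are tau equivalent and item variances are 1).
n, k = 500, 8
loadings = np.full(k, 0.6)
factor = rng.normal(size=(n, 1))
errors = rng.normal(size=(n, k)) * np.sqrt(1 - loadings**2)
X = factor @ loadings[None, :] + errors

def cronbach_alpha(X):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total)."""
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print(round(cronbach_alpha(X), 3))
```

For these loadings the population reliability is (8 × .6)² / ((8 × .6)² + 8 × .64) ≈ .82, so the sample estimate should land near that value; under violations of tau equivalence (unequal loadings), alpha instead becomes a lower bound.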
Despite decades of research, it has been difficult to achieve consensus on a definition of common learning disabilities such as dyslexia. This lack of consensus represents a fundamental problem for the field. Our approach to addressing this issue is to use model-based meta-analyses and Bayesian models with informative priors to combine the results of a large number of studies for the purpose of yielding a more stable and well-supported conceptualization of reading disability. A prerequisite to implementing these models is establishing informative priors for dyslexia. We illustrate a new approach for doing so based on the known distribution of the difference between correlated variables, and use this distribution to determine the proportion of poor readers whose poor reading is unexpected (i.e., likely to be due to dyslexia) as opposed to expected.
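The distributional logic referenced above can be sketched directly: if reading R and a correlated predictor A are standard normal with correlation ρ, the difference R − A is normal with mean 0 and variance 2(1 − ρ). The simulation below (with an assumed ρ and illustrative cutoffs, not the paper's values) classifies poor readers by whether their discrepancy is unexpectedly large.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed correlation between reading and the predictor (illustrative only).
rho = 0.6
n = 1_000_000
cov = [[1, rho], [rho, 1]]
reading, predictor = rng.multivariate_normal([0, 0], cov, size=n).T

# Difference score: analytically, reading - predictor ~ N(0, 2 * (1 - rho)).
diff = reading - predictor
analytic_sd = np.sqrt(2 * (1 - rho))
print(round(diff.std(), 3), round(analytic_sd, 3))  # simulated vs. analytic SD

# Among poor readers (here, bottom 10% of reading), what proportion show an
# unexpectedly large negative discrepancy (below the 5th percentile of the
# difference distribution)? These cutoffs are illustrative assumptions.
poor = reading < np.quantile(reading, 0.10)
unexpected = diff < -1.645 * analytic_sd
prop_unexpected = (poor & unexpected).mean() / poor.mean()
print(round(prop_unexpected, 3))
```

The key point the sketch illustrates is that because R and A are correlated, most poor readers are "expectedly" poor (small discrepancy), and only a minority show the large unexpected discrepancy associated with dyslexia.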
Unlike traditional media, social media systems often present information of different types from different kinds of contributors within a single message pane, a juxtaposition of potential influences that challenges traditional health communication processing. One type of social media system, question-and-answer advice systems, provides peers' answers to health-related questions, which yet other peers read and rate. Responses may appear good or bad, responders may claim expertise, and others' aggregated evaluations of an answer's usefulness may affect readers' judgments. An experiment explored how answer feasibility, expertise claims, and user-generated ratings affected readers' assessments of advice about anonymous HIV testing. Results extend the heuristic-systematic model of persuasion (Chaiken, 1980) and warranting theory (Walther & Parks, 2002). Information that is generally associated with both systematic and heuristic processes influenced readers' evaluations. Moreover, content-level cues affected judgments about message sources unexpectedly. When conflicting cues were present, cues with greater warranting value (consensus user-generated ratings) had greater influence on outcomes than less warranted cues (self-promoted expertise). Findings present a challenge to health professionals' concerns about the reliability of online health information systems.
Set for variability (SfV) is an oral language task that requires an individual to disambiguate the mismatch between the decoded form of an irregular word and its actual lexical pronunciation. For example, in the task, the word wasp is pronounced to rhyme with clasp (i.e., /wæsp/), and the individual must recognize the actual pronunciation of the word to be /wɒsp/. SfV has been shown to be a significant predictor of both item-specific and general word reading variance above and beyond that associated with phonemic awareness skill, letter-sound knowledge, and vocabulary skill. However, very little is known about the child characteristics and word features that affect SfV item performance. In this study, we explored whether word features and child characteristics that involve phonology alone are adequate to explain item-level variance in SfV performance, or whether predictors that involve the connection between phonology and orthography explain additional variance. To accomplish this, we administered the SfV task (75 items) to a sample of children in grades 2-5 (N = 489), along with a battery of reading, reading-related, and language measures. Results suggest that variance in SfV performance is uniquely accounted for by measures tapping phonological skill along with those capturing knowledge of phonology-to-orthography associations, but more so in children with better decoding skill. Additionally, word reading skill was found to moderate the influence of other predictors, suggesting that how the task is approached may be shaped by word reading and decoding ability.
Educational Impact and Implications Statement
Set for variability (SfV) is a powerful predictor of word recognition skill in developing readers. The measure taps children's ability to go from the decoded form of a word (e.g., /wæz/ for was) to the correct form (e.g., /wɒz/ for was), which is considered an important second step in word decoding.
In the current study, we worked to determine what factors lead to variability in children's ability to perform the task. We found that performance on the SfV task was highly correlated with children's phonemic awareness skill and also related to their reading and decoding skill. This suggests that children with advanced reading and decoding skill may be using both phonological and spelling skills to go from the decoded form of a word to the correct pronunciation. The findings suggest that further studies evaluating the causal influence of SfV on reading development are warranted.
Can genetic screening be used to personalize education for students? Genome-wide association studies (GWAS) scan individuals’ DNA for specific variants across the genome and estimate how those variants relate to specific traits. Each variant can then be assigned a corresponding weight, and the weighted variants summed to produce a polygenic score (PGS) for a given trait. Though first developed for disease risk, PGS are now also used to predict educational achievement. Using a novel simulation method, this paper examines whether PGS could advance screening in schools, a goal of personalized education. Results show limited potential benefit of using PGS to personalize education for individual students. However, further analysis shows that PGS can be used effectively alongside progress-monitoring measures to screen for learning disability risk. Altogether, PGS is not useful for personalizing education for every child but has potential utility when combined with additional screening tools to help determine which children may struggle academically.
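The weighted-sum construction of a polygenic score can be sketched in a few lines. Everything here is a toy assumption (random genotypes and effect weights, not real GWAS output): genotypes are coded as 0, 1, or 2 copies of the effect allele, and the score is the weighted allele count, typically standardized before use in screening.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: 1000 people, 500 variants, allele frequency 0.3.
# Genotypes count copies of the effect allele (0, 1, or 2 per variant).
n_people, n_variants = 1000, 500
genotypes = rng.binomial(2, 0.3, size=(n_people, n_variants))

# Hypothetical per-variant effect sizes (in practice, GWAS summary statistics).
weights = rng.normal(0, 0.05, size=n_variants)

pgs = genotypes @ weights                       # one raw score per person
pgs_z = (pgs - pgs.mean()) / pgs.std()          # standardized for screening

print(pgs_z.shape)
```

A real pipeline adds steps this sketch omits (quality control, linkage-disequilibrium pruning, ancestry adjustment), but the core object remains this weighted sum.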