The Remote Associates Test (RAT) is often assumed to be a measure of creativity, yet it has been applied broadly across psychological studies. Originally developed to assess individual differences in associative processing, the RAT has been used to study various constructs, such as creativity, problem solving, insight, and memory. Aside from early validation studies, however, the psychometric properties of the RAT remain largely unexplored. This study examines the internal and external structure validity evidence of a computer-based, 30-item RAT based on scores from a sample of undergraduate students. We examined internal structure via classical test theory item statistics, dimensionality analysis, item response theory analysis, and differential item functioning analysis. Results showed that the two-parameter logistic (2PL) model, in which items have unique discrimination and difficulty parameters, had good fit to item responses from our 30-item RAT. In addition, the relationships between scores on the RAT and a series of other cognitive measures, including divergent thinking, intelligence, and working memory tasks, were examined to assess the external validity of the RAT scores. Results indicate that the RAT assesses cognitive processes similar to those tapped by a wide range of other analytical and convergent thinking tests, distinguishing it from traditional, divergent thinking tests of creativity. In light of concerns regarding the internal and external psychometric properties of creativity measures, our findings help to clarify the item and test characteristics of the RAT.
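The 2PL model referenced above specifies, for each item, a probability of a correct response that depends on the respondent's ability and the item's discrimination and difficulty. A minimal sketch of the 2PL item response function, with hypothetical parameter values chosen purely for illustration:

```python
import numpy as np

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) item response function.

    Returns the probability that a respondent with ability `theta`
    answers correctly an item with discrimination `a` and
    difficulty `b`.
    """
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical item: moderately discriminating, slightly easy.
# (These values are illustrative, not estimates from the study.)
p = p_correct(theta=0.0, a=1.5, b=-0.5)
```

When ability equals difficulty (`theta == b`), the probability is 0.5 regardless of discrimination; larger `a` makes the probability curve steeper around that point, which is what allows items with unique discrimination parameters to separate respondents of similar ability.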
This study describes the development and validation of the Extract the Base test (ETB), which assesses derivational morphological awareness. Scores on this test were validated for 580 monolingual students and 373 Spanish-speaking English language learners (ELLs) in third through fifth grade. As part of the validation of the internal structure, which involved using the Generalized Partial Credit Model for tests with polytomous items, items on this test were shown to provide information about students of different abilities and also to discriminate among such heterogeneous students. As part of the validation of the test's relationship to criterion measures, items were shown to correlate with measures of word identification, reading comprehension, and vocabulary. Differences in performance between fluent English speakers and ELLs, across students of varied home language environments, and across grade levels were noted. Additionally, the task was validated using a dichotomous scoring system to provide reliability and validity information under this alternate scoring method.
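The Generalized Partial Credit Model used above extends the 2PL idea to polytomous items: each item has a discrimination parameter and a set of step difficulties, and the model yields a probability for each score category. A minimal sketch with hypothetical parameter values (not estimates from the study):

```python
import numpy as np

def gpcm_probs(theta, a, step_difficulties):
    """Generalized Partial Credit Model category probabilities
    for one polytomous item.

    `step_difficulties` are the step parameters b_1..b_m; the
    cumulative term for category 0 is fixed at zero by convention.
    Returns an array of probabilities over categories 0..m.
    """
    b = np.asarray(step_difficulties, dtype=float)
    # Cumulative logits: 0, a(theta-b_1), a(theta-b_1)+a(theta-b_2), ...
    logits = np.cumsum(np.concatenate(([0.0], a * (theta - b))))
    # Softmax with max-subtraction for numerical stability.
    expz = np.exp(logits - logits.max())
    return expz / expz.sum()

# Hypothetical 4-category item (3 steps), illustrative values only.
probs = gpcm_probs(theta=0.5, a=1.2, step_difficulties=[-1.0, 0.0, 1.0])
```

The category probabilities always sum to one, and as ability `theta` increases, probability mass shifts toward the higher score categories, which is how a polytomous item provides information across a range of student abilities.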
As part of a 5-year professional development intervention aimed at improving the science and literacy achievement of English language learning (ELL) students in urban elementary schools, this study examined fourth-grade students' science achievement across a 3-year (2005-2008) implementation of our professional development intervention consisting of curriculum units and teacher workshops. Analyses were conducted with 1,758 students at six schools that participated in the intervention. The results of this study reveal several central findings. First, students in the treatment group displayed a statistically significant increase in science achievement from pre- to posttest. Second, there was