The Arnett Caregiver Interaction Scale (CIS) has been widely used to measure the quality of caregiver–child interactions. The scale was modeled on a well-established theory of parenting, but few psychometric studies have examined its validity. We applied factor analysis and item response theory methods to assess the psychometric properties of the Arnett CIS in a national sample of toddlers in home-based care and preschoolers in center-based care from the Early Childhood Longitudinal Study–Birth Cohort. We found that a bifactor structure (one common factor plus a set of specific factors) best fits the data. In the Arnett CIS, the bifactor model distinguishes a common substantive dimension from two methodological dimensions (one for positively and one for negatively oriented items). Despite the good fit of this model, the items are skewed (most teachers/caregivers display positive interactions with children); as a result, the Arnett CIS is poorly suited to distinguishing caregivers who are “highly” versus “moderately” positive in their interactions with children. Regression-adjusted associations between the Arnett CIS and child outcomes are small, especially for preschoolers in centers. We encourage early childhood scholars to pursue further scale development work on measures of child care quality.
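To illustrate why skewed items limit discrimination at the top of a scale, here is a minimal stdlib-Python sketch. The response distribution is hypothetical (illustrative weights, not ECLS-B estimates); it simply shows the ceiling effect the abstract describes, where most ratings pile up at the highest category.

```python
import random
import statistics

random.seed(0)

# Hypothetical 4-point Likert responses (1 = never, 4 = always) for one
# positively oriented item, skewed toward the top of the scale.
# Weights are illustrative only, not estimates from the Arnett CIS data.
responses = random.choices([1, 2, 3, 4], weights=[2, 8, 30, 60], k=1000)

mean = statistics.mean(responses)
at_ceiling = responses.count(4) / len(responses)

print(f"mean rating: {mean:.2f}")
print(f"share at ceiling (rating = 4): {at_ceiling:.0%}")

# With most responses piled at 3-4, the item carries little information
# for separating "highly" from "moderately" positive caregivers: both
# groups mostly receive the same top ratings.
```

When a majority of respondents sit in the top category, differences among high scorers are invisible to the item, which is the practical limitation the abstract reports.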
This paper highlights current findings and issues regarding the interplay between computer-adaptive testing and test anxiety. The computer-adaptive test (CAT) proposed by one of the Common Core consortia brings these issues to the forefront. Research has long indicated that test anxiety impairs student performance, and more recent research indicates that taking a test in a CAT format can affect the ability estimates of students with test anxiety. Inaccurate measures of ability are disconcerting because of the threat they pose to the validity of test score interpretation. This paper raises concerns about how implementing a computer-adaptive test for a large-scale Common Core assessment system could differentially affect students with test anxiety. Issues of fairness and score comparability are raised, and their implications are discussed.
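The mechanism at issue can be sketched with a minimal Rasch-based CAT step: the test picks whichever unanswered item carries the most information at the examinee's current ability estimate. The item bank and ability value below are hypothetical, and real consortium algorithms add content constraints and exposure control; the sketch only shows why early errors (e.g., from anxiety) that depress the provisional estimate route a student toward easier items and can bias the final ability estimate.

```python
import math

def p_correct(theta, b):
    """Rasch probability of a correct response at ability theta, difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item: p * (1 - p), peaked where b is near theta."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

def next_item(theta, bank):
    """Select the item with maximum information at the current ability estimate."""
    return max(bank, key=lambda b: item_information(theta, b))

# Hypothetical item bank (difficulties in logits) and provisional ability.
bank = [-2.0, -1.0, 0.0, 1.0, 2.0]
theta_hat = 0.4

chosen = next_item(theta_hat, bank)
print(f"next item difficulty: {chosen}")  # the difficulty closest to theta_hat

# If anxiety produces early incorrect answers, theta_hat drops, so subsequent
# items are drawn from the easier end of the bank -- the routing effect that
# can distort ability estimates for anxious examinees.
```

Because information p(1 − p) peaks when item difficulty matches ability, the selected item tracks the provisional estimate, which is exactly why estimate distortions compound as the test adapts.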