The primary purposes of this investigation were to (a) continue a line of research examining the psychometric defensibility of the Social, Academic, and Emotional Behavior Risk Screener - Teacher Rating Scale (SAEBRS-TRS), and (b) develop and preliminarily evaluate the diagnostic accuracy of a novel multiple gating procedure based on teacher nomination and the SAEBRS-TRS. Two studies were conducted with elementary and middle school student samples in two separate geographic locations. Study 1 (n = 864 students) results supported the SAEBRS-TRS's defensibility, revealing acceptable to optimal levels of internal consistency reliability, concurrent validity, and diagnostic accuracy. Findings were also promising for the combined multiple gating procedure, which demonstrated acceptable levels of sensitivity and specificity. Study 2 (n = 1,534 students), which replicated Study 1 procedures, further supported the SAEBRS-TRS's psychometric defensibility in terms of reliability, validity, and diagnostic accuracy. Despite revisions intended to improve sensitivity, the combined multiple gating procedure's diagnostic accuracy was similar to that found in Study 1. Taken together, results build upon prior research supporting the applied use of the SAEBRS-TRS and justify future research on a SAEBRS-based multiple gating procedure. Implications for practice and study limitations are discussed.
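As a rough illustration of the diagnostic accuracy indices reported above, the following Python sketch computes sensitivity and specificity for a two-gate procedure in which a student screens positive only when flagged by both teacher nomination (Gate 1) and the SAEBRS-TRS (Gate 2). All data, base rates, score distributions, and the cut score here are simulated placeholders, not values from either study.

```python
import numpy as np

# Hedged sketch of diagnostic accuracy for a two-gate screening procedure
# (teacher nomination -> SAEBRS-TRS). Everything below is hypothetical.
rng = np.random.default_rng(0)
n = 1000
truly_at_risk = rng.random(n) < 0.15                 # criterion status (assumed base rate)

# Gate 1: teacher nomination (binary); Gate 2: SAEBRS-TRS total score,
# where lower scores indicate greater risk (assumed distributions).
nominated = rng.random(n) < np.where(truly_at_risk, 0.85, 0.20)
saebrs_total = np.where(truly_at_risk,
                        rng.normal(24, 6, n),
                        rng.normal(40, 6, n))
cut_score = 32                                       # hypothetical cut score

# A student screens positive only if flagged at BOTH gates.
screen_positive = nominated & (saebrs_total <= cut_score)

tp = np.sum(screen_positive & truly_at_risk)
fn = np.sum(~screen_positive & truly_at_risk)
tn = np.sum(~screen_positive & ~truly_at_risk)
fp = np.sum(screen_positive & ~truly_at_risk)

sensitivity = tp / (tp + fn)   # proportion of at-risk students correctly flagged
specificity = tn / (tn + fp)   # proportion of not-at-risk students correctly passed
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

Requiring a flag at both gates tends to raise specificity at the cost of sensitivity, which is consistent with the tension the two studies describe when revising the combined procedure.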
The purpose of this investigation was to evaluate the models for interpretation and use that serve as the foundation of an interpretation/use argument for the Social and Academic Behavior Risk Screener (SABRS). The SABRS was completed by 34 teachers with regard to 488 students in a Midwestern high school during the winter portion of the academic year. Confirmatory factor analysis supported interpretation of SABRS data, suggesting the fit of a bifactor model specifying 1 broad factor (General Behavior) and 2 narrow factors (Social Behavior [SB] and Academic Behavior [AB]). The interpretive model was further supported by analyses indicative of the internal consistency and interrater reliability of scores from each factor. In addition, latent profile analyses indicated the adequate fit of the proposed 4-profile SABRS model for use. When cross-referenced with SABRS cut scores identified via previous work, results revealed students could be categorized as (a) not at-risk on both SB and AB, (b) at-risk on SB but not on AB, (c) at-risk on AB but not on SB, or (d) at-risk on both SB and AB. Taken together, results contribute to growing evidence supporting the SABRS within universal screening. Limitations, implications for practice, and future directions for research are discussed herein.
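To make the four-profile model for use concrete, the sketch below categorizes a student by cross-referencing Social Behavior (SB) and Academic Behavior (AB) scores against cut scores. The cut scores and the at-or-below risk rule here are hypothetical placeholders for illustration; the actual cuts were identified empirically in previous work.

```python
# Hedged sketch: sorting students into the four SABRS risk profiles.
# SB_CUT and AB_CUT are hypothetical, not the empirically derived cuts.
SB_CUT, AB_CUT = 12, 9          # assumed: scoring at or below a cut = at-risk

def sabrs_profile(sb_score: int, ab_score: int) -> str:
    """Return the risk profile implied by SB and AB scores."""
    sb_risk = sb_score <= SB_CUT
    ab_risk = ab_score <= AB_CUT
    if not sb_risk and not ab_risk:
        return "not at-risk on SB or AB"
    if sb_risk and not ab_risk:
        return "at-risk on SB only"
    if ab_risk and not sb_risk:
        return "at-risk on AB only"
    return "at-risk on both SB and AB"

print(sabrs_profile(sb_score=10, ab_score=14))  # -> "at-risk on SB only"
```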
The purpose of this study was to evaluate the psychometric defensibility of the Social, Academic, and Emotional Behavior Risk Screener (SAEBRS), a brief universal screener for behavioral and emotional risk. Elementary school teachers completed the SAEBRS for 346 students in Grades 3 to 5. Teachers also completed two criterion measures: the Student Risk Screening Scale (SRSS) and the Student Internalizing Behavior Screener (SIBS). Additional extant behavioral and academic data were collected, including office discipline referrals, suspensions, curriculum-based measurement scores, and statewide achievement test scores. Reliability analyses indicated the internal consistency of all four SAEBRS scales, whereas correlational analyses and Mann–Whitney–Wilcoxon tests supported criterion-related and construct validity. Receiver operating characteristic (ROC) curve analyses suggested each SAEBRS scale was associated with acceptable or optimal diagnostic accuracy. However, the cut scores selected as most appropriate for each SAEBRS scale differed from those identified in previous studies, suggesting that the criterion outcome under consideration may influence SAEBRS diagnostic accuracy. Limitations and future directions for research are discussed, with emphasis on the need for continued examination of variability in SAEBRS cut score performance.
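The following sketch illustrates the general form of the ROC analyses described above: computing the area under the curve and selecting a cut score, here via Youden's J, one common selection rule. The simulated data, assumed base rate, and selection rule are illustrative assumptions, not the study's actual procedure or values; note how a different criterion would shift the selected cut.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hedged sketch of ROC-based diagnostic accuracy and cut-score selection.
rng = np.random.default_rng(1)
n = 346
at_risk = rng.random(n) < 0.20                       # hypothetical criterion status
# Lower SAEBRS scores indicate greater risk, so negate scores so that
# higher values of the ROC input indicate the positive (at-risk) class.
score = np.where(at_risk, rng.normal(25, 7, n), rng.normal(42, 7, n))
risk_index = -score

fpr, tpr, thresholds = roc_curve(at_risk, risk_index)
auc = roc_auc_score(at_risk, risk_index)

# Youden's J = sensitivity + specificity - 1; pick the threshold maximizing it.
j = tpr - fpr
best = np.argmax(j)
best_cut = -thresholds[best]                         # back on the original scale
print(f"AUC = {auc:.2f}, cut = {best_cut:.1f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```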
The present study explores the convergent and predictive validity of several widely used measures of teaching quality from the Measures of Effective Teaching Project (Bill & Melinda Gates Foundation, 2009-2011). Specifically, the Classroom Assessment Scoring System (CLASS; Pianta, Hamre, & Mintz, 2012), the Framework for Teaching (FFT; Danielson Group, 2013), and the Tripod Student Perceptions Scale (Tripod; Ferguson, 2008) were examined. Correlations among measures were assessed by developmental level and content area (elementary mathematics N = 70; elementary English language arts N = 101; middle school mathematics N = 291; middle school English language arts N = 280). Both average scores and score variability (i.e., the coefficient of variation) for the CLASS, FFT, and Tripod were used to predict value-added model (VAM) scores, a high-stakes measure of students' academic growth. For elementary mathematics and ELA, the CLASS and FFT exhibited moderate convergent validity, whereas the Tripod showed divergent validity with both the CLASS and FFT. Across content areas in the middle school grades, the CLASS, FFT, and Tripod exhibited moderate to high-moderate convergent validity. Average student and observer scores were positively related to VAM scores, whereas variability in scores was negatively related to VAM scores. Implications of the findings for teacher evaluation and professional development are discussed.

For decades, practitioners, researchers, and policy makers have endeavored to generate measures that capture "effective teaching" (Stronge, Ward, & Grant, 2011). Teachers have the potential to play a pivotal role in the academic and social-emotional development of their students (Pianta, 1999), yet education research indicates there is considerable variation in the quality of instruction students receive within and across classrooms (Chetty, Friedman, & Rockoff, 2011; Cohen & Goldhaber, 2016; Cohen, Ruzek, & Sandilos, 2018). In the United States, evaluation of effective teaching has been propelled forward by the adoption of federal policies that provide incentives based on teacher qualifications and student achievement, such as the Teacher Incentive Fund (Heyburn, Lewis, & Ritter, 2010) and Race to the Top (U.S. Department of Education, 2009). More recently, the Every Student Succeeds Act (ESSA, 2015) stipulated that teacher evaluation systems should include multiple measures of teacher effectiveness that inform instructional planning and professional growth opportunities, underscoring the value of accumulating convergent and predictive psychometric evidence for various measures of instructional quality. Despite increased national attention, consensus about how best to measure teacher effectiveness has yet to be established.
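As a rough illustration of the two score-level predictors described in the abstract above, this sketch computes each teacher's mean observation score and coefficient of variation (SD divided by mean) across lessons and correlates both with VAM scores. All values are simulated to mirror the reported direction of effects; nothing here is MET Project data, and the sample sizes and rating scale are assumed.

```python
import numpy as np

# Hedged sketch: mean score and coefficient of variation (CV = SD / mean)
# across a teacher's observed lessons as predictors of VAM scores.
rng = np.random.default_rng(2)
n_teachers, n_lessons = 70, 4
lesson_scores = rng.normal(5.0, 1.0, size=(n_teachers, n_lessons))  # e.g., 1-7 ratings

mean_score = lesson_scores.mean(axis=1)
cv = lesson_scores.std(axis=1, ddof=1) / mean_score   # within-teacher variability

# Simulated VAM scores built to reflect the reported pattern:
# higher means relate positively, higher variability negatively.
vam = 0.5 * mean_score - 0.8 * cv + rng.normal(0, 0.3, n_teachers)

print("r(mean, VAM) =", np.corrcoef(mean_score, vam)[0, 1].round(2))
print("r(CV,   VAM) =", np.corrcoef(cv, vam)[0, 1].round(2))
```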