Many STEM degrees require passing an introductory physics course. Physics courses often have high failure rates that may disproportionately harm students who are marginalized by racism, sexism, and classism. We examined the associations between Learning Assistant (LA) supported courses and equity in non-passing grades (i.e., D, drop, fail, or withdrawal; DFW) in introductory physics courses. The data came from 2312 students in 41 sections of introductory physics courses at a regional Hispanic-serving institution. We developed hierarchical generalized linear models of student DFW rates that accounted for gender, race, first-generation status, and LA-supported instruction. We used a quantitative critical race theory (QuantCrit) perspective focused on the role of hegemonic power structures in perpetuating inequitable student outcomes; this perspective informed our research questions, methods, and interpretations of findings. The models associated LAs with overall decreases in DFW rates and with larger decreases in DFW rates for students of color than for their white peers. While the inequities in DFW rates were smaller in LA-supported courses, they were still present.
We investigated the intersectional nature of race/racism and gender/sexism in broad-scale inequities in physics student learning using a critical quantitative intersectionality framework. To provide transparency and create a nuanced picture of learning, we problematized the measurement of equity by using two competing operationalizations of equity: Equity of Individuality and Equality of Learning. These two models led to conflicting conclusions. The analyses used hierarchical linear models to examine students' conceptual learning as measured by gains in scores on research-based assessments administered as pretests and posttests. The data came from the Learning About STEM Student Outcomes (LASSO) national database and included 13,857 students in 187 first-semester college physics courses. Findings showed differences in student gains across gender and race. Large gender differences existed for White and Hispanic students but not for Asian, Black, and Pacific Islander students. The models predicted larger gains for students in collaborative learning than in lecture-based courses. The Equity of Individuality operationalization indicated that collaborative instruction improved equity because all groups learned more with collaborative learning. The Equality of Learning operationalization indicated that collaborative instruction did not improve equity because differences between groups were unaffected. We discuss the implications of these mixed findings.
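The difference between the two operationalizations can be sketched with a toy numeric example (the group labels and pre/post scores below are illustrative only, not values from the LASSO dataset):

```python
# Hypothetical pre/post mean scores (percent correct) for two groups
# under lecture-based vs. collaborative instruction. Illustrative only.
lecture = {"group_a": (40.0, 55.0), "group_b": (35.0, 45.0)}
collab = {"group_a": (40.0, 65.0), "group_b": (35.0, 55.0)}

def gain(scores):
    pre, post = scores
    return post - pre

# Equity of Individuality: did every group learn more with collaboration?
individuality = all(gain(collab[g]) > gain(lecture[g]) for g in lecture)

# Equality of Learning: did the gap between groups shrink?
gap_lecture = gain(lecture["group_a"]) - gain(lecture["group_b"])
gap_collab = gain(collab["group_a"]) - gain(collab["group_b"])
equality = gap_collab < gap_lecture

print(individuality)  # True: every group gained more with collaboration
print(equality)       # False: the between-group gap is unchanged
```

With these numbers both groups gain 10 more points under collaboration, so equity "improves" under Equity of Individuality, yet the 5-point gap between groups persists, so it does not improve under Equality of Learning, mirroring the conflicting conclusions described above.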
Physics education research (PER) studies commonly use complete-case analysis to address missing data. For complete-case analysis, researchers discard all data from any student who is missing any data. Despite its frequent use, no PER article we reviewed that used complete-case analysis provided evidence that the data met the assumption of missing completely at random (MCAR) necessary to ensure accurate results. Not meeting this assumption raises the possibility that prior studies have reported biased results with inflated gains that may obscure differences across courses. To test this possibility, we compared the accuracy of complete-case analysis and multiple imputation (MI) using simulated data. We simulated the data based on prior studies such that students who earned higher grades participated at higher rates, which made the data missing at random (MAR). PER studies seldom use MI, but MI uses all available data, has less stringent assumptions, and is more accurate and more statistically powerful than complete-case analysis. Results indicated that complete-case analysis introduced more bias than MI and that this bias was large enough to obscure differences between student populations or between courses. We recommend that the PER community adopt the use of MI for handling missing data to improve the accuracy of research studies.
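The bias that complete-case analysis introduces under a MAR mechanism can be sketched with a small simulation (all parameters below are hypothetical; in practice MI itself would be carried out with established tools such as the R `mice` package rather than written by hand):

```python
import random
import statistics

random.seed(0)

# Hypothetical MAR mechanism: students with higher course grades earn
# higher posttest scores AND participate at higher rates, so the
# complete-case sample over-represents high scorers.
n = 20000
grades = [random.gauss(0, 1) for _ in range(n)]
posttests = [50 + 10 * g + random.gauss(0, 5) for g in grades]

# Participation probability rises with grade: 90% for above-average
# students, 40% for the rest.
observed = [y for g, y in zip(grades, posttests)
            if random.random() < (0.9 if g > 0 else 0.4)]

true_mean = statistics.mean(posttests)  # the target MI aims to recover
cc_mean = statistics.mean(observed)     # complete-case estimate

print(round(cc_mean - true_mean, 1))
```

Because low-scoring students drop out of the observed sample more often, the complete-case mean overestimates the true class mean by several points here, the kind of inflated gain the abstract warns can obscure real differences between courses.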
Measuring student learning is a complicated but necessary task for understanding the effectiveness of instruction and issues of equity in college science, technology, engineering, and mathematics (STEM) courses. Our investigation focused on the implications for claims about student learning that result from choosing between two commonly used metrics for analyzing shifts in concept inventories. The metrics are normalized gain (g), which is the most common method used in physics education research and other discipline-based education research fields, and Cohen's d, which is broadly used in education research and many other fields. Data for the analyses came from the Learning About STEM Student Outcomes (LASSO) database and included test scores from 4551 students on physics, chemistry, biology, and math concept inventories from 89 courses at 17 institutions from across the United States. We compared the two metrics across all the concept inventories. The results showed that the two metrics lead to different inferences about student learning and equity because g is biased in favor of high-pretest populations. We discuss recommendations for the analysis and reporting of findings on student learning data.
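The two metrics can be sketched as follows (the score values are hypothetical; Hake's normalized gain is computed from percent scores, while Cohen's d divides the mean shift by a pooled standard deviation):

```python
import statistics

def normalized_gain(pre_mean, post_mean, max_score=100.0):
    """Hake's normalized gain: g = (post - pre) / (max - pre)."""
    return (post_mean - pre_mean) / (max_score - pre_mean)

def cohens_d(pre_scores, post_scores):
    """Cohen's d: mean shift divided by the pooled standard deviation."""
    m_pre, m_post = statistics.mean(pre_scores), statistics.mean(post_scores)
    s_pre, s_post = statistics.stdev(pre_scores), statistics.stdev(post_scores)
    pooled = ((s_pre**2 + s_post**2) / 2) ** 0.5
    return (m_post - m_pre) / pooled

# Two hypothetical courses with identical 15-point raw gains but
# different pretest means: g rewards the high-pretest course, which
# illustrates the pretest bias described in the abstract.
low_pre = normalized_gain(30.0, 45.0)    # 15 / 70 ≈ 0.21
high_pre = normalized_gain(60.0, 75.0)   # 15 / 40 ≈ 0.38
```

Because g divides the raw gain by the room left to grow (max − pre), a population starting at 60% earns a larger g than one starting at 30% for the same 15-point improvement, whereas d depends only on the shift relative to score spread.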
[This paper is part of the Focused Collection on Quantitative Methods in PER: A Critical Examination.] Physics education research (PER) studies often analyze student data with single-level regression models (e.g., linear and logistic regression). However, education datasets can have hierarchical structures, such as students nested within courses, that single-level models fail to account for. The improper use of single-level models to analyze hierarchical datasets can lead to biased findings. Hierarchical models (also known as multilevel models) account for this nested structure in the data. In this publication, we outline the theoretical differences between how single-level and multilevel models handle hierarchical datasets. We then present an analysis of a dataset from 112 introductory physics courses using both multiple linear regression and hierarchical linear modeling to illustrate the potential impact of using an inappropriate analytical method on PER findings and implications. Research can leverage multi-institutional datasets to improve the field's understanding of how to support student success in physics. There is no post hoc fix, however, if researchers use inappropriate single-level models to analyze multilevel datasets. To continue developing reliable and generalizable knowledge, PER should use hierarchical models when analyzing hierarchical datasets. The Supplemental Material includes a sample dataset, R code for the model building and analysis presented in the paper, and an HTML output from the R code.
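Why nesting matters can be sketched by estimating the intraclass correlation (ICC) on simulated nested data (all numbers below are hypothetical; in practice hierarchical models would be fit with tools such as R's lme4 or Python's statsmodels rather than by hand):

```python
import random
import statistics

random.seed(1)

# Hypothetical nested data: 40 courses with 30 students each. Each
# course has its own random effect, so scores within a course are
# correlated — the structure a single-level regression ignores.
n_courses, n_students = 40, 30
course_effects = [random.gauss(0, 3) for _ in range(n_courses)]
data = [[60 + u + random.gauss(0, 6) for _ in range(n_students)]
        for u in course_effects]

# One-way ANOVA variance-components estimate of the ICC for a balanced
# design: the share of score variance lying between courses rather than
# between students within a course.
grand = statistics.mean(s for course in data for s in course)
ms_between = n_students * sum((statistics.mean(c) - grand) ** 2
                              for c in data) / (n_courses - 1)
ms_within = statistics.mean(statistics.variance(c) for c in data)
icc = (ms_between - ms_within) / (ms_between + (n_students - 1) * ms_within)
```

Here the true ICC is 9 / (9 + 36) = 0.2 by construction; a clearly nonzero ICC like this signals that observations are not independent and that a hierarchical model, not a single-level one, is the appropriate analysis.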
Background: High-stakes assessments, such as the Graduate Record Examination, have transitioned from paper to computer administration. Low-stakes research-based assessments (RBAs), such as the Force Concept Inventory, have only recently begun this transition to computer administration with online services. These online services can simplify administering, scoring, and interpreting assessments, thereby reducing barriers to instructors' use of RBAs. By supporting instructors' objective assessment of the efficacy of their courses, these services can stimulate instructors to transform their courses to improve student outcomes. We investigate the extent to which RBAs administered outside of class with the online Learning About STEM Student Outcomes (LASSO) platform provide equivalent data to tests administered on paper in class, in terms of both student participation and performance. We use an experimental design to investigate the differences between these two assessment conditions with 1310 students in 25 sections of 3 college physics courses spanning 2 semesters. Results: Analysis conducted using hierarchical linear models indicates that student performance on low-stakes RBAs is equivalent for online (out-of-class) and paper-and-pencil (in-class) administrations. The models also show differences in participation rates across assessment conditions and student grades, but indicate that instructors can achieve participation rates with online assessments equivalent to paper assessments by offering students credit for participating and by providing multiple reminders to complete the assessment. Conclusions: We conclude that online out-of-class administration of RBAs can save class and instructor time while providing participation rates and performance results equivalent to in-class paper-and-pencil tests.
The American Physical Society calls on its members to improve the diversity of physics by supporting an inclusive culture that encourages women and Black, Indigenous, and people of color to become physicists. In the current educational system, it is unlikely for a student to become a physicist if they do not share the same attitudes about what it means to learn and do physics as those held by most professional physicists. Evidence shows that college physics courses and degree programs do not support students in developing these attitudes. Rather, physics education filters out students who do not enter college physics courses with these attitudes. To better understand the role of attitudes in the lack of diversity in physics, we investigated the intersecting relationships between racism and sexism in inequities in student attitudes about learning and doing physics using a critical quantitative framework. The analyses used hierarchical linear models to examine students' attitudes as measured by the Colorado Learning Attitudes about Science Survey. The data came from the Learning About STEM Student Outcomes database and included 2170 students in 46 calculus-based mechanics courses and 2503 students in 49 algebra-based mechanics courses taught at 18 institutions. Like prior studies, we found that attitudes either did not change or slightly decreased for most groups. Results identified large differences across intersecting race and gender groups, representing educational debts society owes these students. White students, particularly White men in calculus-based courses, tended to have more expertlike attitudes than any other group of students. Instruction that addresses society's educational debts can help move physics toward an inclusive culture supportive of diverse students and professionals.