To test the hypothesis that lecturing maximizes learning and course performance, we meta-analyzed 225 studies that reported data on examination scores or failure rates when comparing student performance in undergraduate science, technology, engineering, and mathematics (STEM) courses under traditional lecturing versus active learning. The effect sizes indicate that, on average, student performance on examinations and concept inventories increased by 0.47 SDs under active learning (n = 158 studies), and that the odds ratio for failing was 1.95 under traditional lecturing (n = 67 studies). These results indicate that average examination scores improved by about 6% in active learning sections, and that students in classes with traditional lecturing were 1.5 times more likely to fail than were students in classes with active learning. Heterogeneity analyses indicated that both results hold across the STEM disciplines, that active learning increases scores on concept inventories more than on course examinations, and that active learning appears effective across all class sizes, although the greatest effects are in small (n ≤ 50) classes. Trim-and-fill analyses and fail-safe n calculations suggest that the results are not due to publication bias. The results also appear robust to variation in the methodological rigor of the included studies, based on the quality of controls over student quality and instructor identity. This is the largest and most comprehensive meta-analysis of undergraduate STEM education published to date. The results raise questions about the continued use of traditional lecturing as a control in research studies, and support active learning as the preferred, empirically validated teaching practice in regular classrooms.

constructivism | undergraduate education | evidence-based teaching | scientific teaching

Lecturing has been the predominant mode of instruction since universities were founded in Western Europe over 900 y ago (1).
Although theories of learning that emphasize the need for students to construct their own understanding have challenged the theoretical underpinnings of the traditional, instructor-focused, "teaching by telling" approach (2, 3), to date there has been no quantitative analysis of how constructivist versus exposition-centered methods impact student performance in undergraduate courses across the science, technology, engineering, and mathematics (STEM) disciplines. In the STEM classroom, should we ask or should we tell?

Addressing this question is essential if scientists are committed to teaching based on evidence rather than tradition (4). The answer could also be part of a solution to the "pipeline problem" that some countries are experiencing in STEM education: for example, the observation that less than 40% of US students who enter university with an interest in STEM, and just 20% of STEM-interested underrepresented minority students, finish with a STEM degree (5).

To test the efficacy of constructivist versus exposition-centered course designs, we focused on the design of clas...
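The distinction between the reported odds ratio (1.95) and the statement that lecture students were "1.5 times more likely to fail" reflects the difference between odds ratios and risk ratios. A minimal sketch of that arithmetic, using hypothetical failure rates chosen only for illustration (they are not the rates reported in the meta-analysis):

```python
# Illustrative arithmetic: an odds ratio near 1.95 can coexist with a
# risk ratio closer to 1.5-1.6, because odds (p / (1 - p)) grow faster
# than probabilities as p increases. Failure rates below are hypothetical.

def odds(p):
    """Odds of an event with probability p."""
    return p / (1 - p)

p_lecture = 0.34  # hypothetical failure rate under traditional lecturing
p_active = 0.21   # hypothetical failure rate under active learning

odds_ratio = odds(p_lecture) / odds(p_active)  # ratio of odds of failing
risk_ratio = p_lecture / p_active              # ratio of probabilities

print(f"odds ratio: {odds_ratio:.2f}")
print(f"risk ratio: {risk_ratio:.2f}")
```

With these made-up rates the odds ratio comes out near 1.94 while the risk ratio is about 1.62, showing why the two summary numbers in the abstract differ even though they describe the same comparison.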
The authors explore the transferability of an active-learning intervention and expand upon the original studies by 1) disaggregating student populations to identify for whom the intervention works best and 2) exploring possible proximate mechanisms (changes in student behaviors and perceptions) that could mediate the observed increase in achievement.
Across all sciences, the quality of measurements is important. Survey measurements are only appropriate for use when researchers have validity evidence within their particular context. Yet, this step is frequently skipped or is not reported in educational research. This article briefly reviews the aspects of validity that researchers should consider when using surveys. It then focuses on factor analysis, a statistical method that can be used to collect an important type of validity evidence. Factor analysis helps researchers explore or confirm the relationships between survey items and identify the total number of dimensions represented on the survey. The essential steps to conduct and interpret a factor analysis are described. This use of factor analysis is illustrated throughout by a validation of Diekman and colleagues’ goal endorsement instrument for use with first-year undergraduate science, technology, engineering, and mathematics students. We provide example data, annotated code, and output for analyses in R, an open-source programming language and software environment for statistical computing. For education researchers using surveys, understanding the theoretical and statistical underpinnings of survey validity is fundamental for implementing rigorous education research.
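The factor-analysis workflow described above (explore how survey items group and how many dimensions the instrument measures) can be sketched in a few lines. This is not the authors' annotated R code; it is an illustrative Python example on simulated Likert-style responses, with invented item structure and two hypothetical latent dimensions:

```python
# Illustrative sketch of exploratory factor analysis on simulated survey
# data. The two latent dimensions and six items are invented for
# demonstration; real validation work would use actual student responses.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500

# Two hypothetical latent goal dimensions underlying the responses.
factor_a = rng.normal(size=n)
factor_b = rng.normal(size=n)

# Six items: the first three are driven by factor_a, the last three by
# factor_b, plus item-level noise.
items = np.column_stack(
    [factor_a, factor_a, factor_a, factor_b, factor_b, factor_b]
) + rng.normal(scale=0.5, size=(n, 6))

# Fit a two-factor model with varimax rotation and inspect loadings.
fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(items)

# Rows of the transposed components matrix are items; columns are factors.
loadings = fa.components_.T
print(np.round(loadings, 2))
```

In this simulated case the loading matrix recovers the intended structure: each item loads strongly on exactly one of the two factors, which is the kind of pattern researchers look for when confirming that survey items measure the dimensions they were written to measure.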
Although females outnumber males in biology, this study of 23 different introductory biology classrooms reveals systematic gender disparities in student performance on exams and student participation when instructors ask students to volunteer answers to instructor-posed questions.
The authors developed and assessed an innovative course-based undergraduate research experience that emphasized collaboration among students and focused on data analysis.
Women who start college in one of the natural or physical sciences leave in greater proportions than their male peers. The reasons for this difference are complex, and one possible contributing factor is the social environment women experience in the classroom. Using social network analysis, we explore how gender influences the confidence that college-level biology students have in each other’s mastery of biology. Results reveal that males are more likely than females to be named by peers as being knowledgeable about the course content. This effect increases as the term progresses, and persists even after controlling for class performance and outspokenness. The bias in nominations is specifically due to males over-nominating their male peers relative to their performance. The over-nomination of male peers is commensurate with an overestimation of male grades by 0.57 points on a 4 point grade scale, indicating a strong male bias among males when assessing their classmates. Females, in contrast, nominated equitably based on student performance rather than gender, suggesting they lacked gender biases in filling out these surveys. These trends persist across eleven surveys taken in three different iterations of the same Biology course. In every class, the most renowned students are always male. This favoring of males by peers could influence student self-confidence, and thus persistence in this STEM discipline.
[This paper is part of the Focused Collection on Gender in Physics.] This focused collection explores inequalities in the experiences of women in physics. Yet, it is important for researchers to also be aware of and draw insights from common patterns in the experiences of women across science, technology, engineering, and mathematics (STEM) disciplines. Here, we review studies on gender disparities across college STEM on measures that have been correlated with retention. These include disparities in academic performance, engagement, self-efficacy, belonging, and identity. We argue that observable factors such as persistence, performance, and engagement can inform researchers about what populations are disadvantaged in a STEM classroom or program, but we need to measure underlying mechanisms to understand how these inequalities arise. We present a framework that helps connect larger sociocultural factors, including stereotypes and gendered socialization, to student affect and observable behaviors in STEM contexts. We highlight four mechanisms that demonstrate how sociocultural factors could impact women in STEM classrooms and majors. We end with a set of recommendations for how we can more holistically evaluate the experiences of women in STEM to help mitigate the underlying inequities instead of applying a quick fix.
PORTAAL, a new evidence-based classroom observation tool, identifies 21 elements of classroom best practices for active learning that have been correlated with positive student outcomes in the education literature. After only 5 h of training, instructors can reliably use this tool to determine their alignment with these teaching practices.