The ability to make decisions based on data, with its inherent uncertainties and variability, is a complex and vital skill in the modern world. The need for such quantitative critical thinking occurs in many different contexts, and although it is an important goal of education, that goal is seldom achieved. We argue that the key element for developing this ability is repeated practice in making decisions based on data, with feedback on those decisions. We demonstrate a structure for providing suitable practice that can be applied in any instructional setting that involves acquiring data and relating those data to scientific models. This study reports the results of applying that structure in an introductory physics laboratory course. Students in an experimental condition were repeatedly instructed to make and act on quantitative comparisons between datasets, and between data and models, an approach that is common to all science disciplines. These instructions were gradually faded across the course. After the instructions had been removed, students in the experimental condition were 12 times more likely to spontaneously propose or make changes to improve their experimental methods than a control group that performed traditional experimental activities. The students in the experimental condition were also four times more likely to identify and explain a limitation of a physical model using their data, and they showed much more sophisticated reasoning about their data. These differences between the groups persisted into a subsequent course taken the following year.

critical thinking | scientific reasoning | scientific teaching | teaching experimentation | undergraduate education

A central goal of science education is to teach students to think critically about scientific data and models.
It is crucial for scientists, engineers, and citizens in all walks of life to be able to critique data, to identify whether or not conclusions are supported by evidence, and to distinguish a significant effect from random noise and variability. Many societal debates illustrate how difficult it is for people to master this type of thinking. Although teaching quantitative critical thinking is a fundamental goal of science education, particularly the laboratory portion, the evidence indicates this is seldom, if ever, being achieved (1-6). To address this educational need, we have analyzed the explicit cognitive processes involved in such critical thinking and then developed an instructional design to incorporate these processes. We argue that scientists engage in such critical thinking through a process of repeated comparisons and decisions: comparing new data to existing data and/or models and then deciding how to act on those comparisons based on analysis tools that embody appropriate statistical tests. Those actions typically lead to further iterations involving improving the data and/or modifying the experiment or model. In a research setting…
We have analyzed the impact of taking an associated lab course on scores on final exam questions in two large introductory physics courses. Approximately a third of the students who completed each course also took an accompanying instructional lab course. The lab courses were fairly conventional, although they focused on supporting mastery of a subset of the introductory physics topics covered in the associated course. The performance of students who did and did not take the lab course was compared using final exam questions from the associated courses that related to concepts from the lab courses. The population of students who took the lab in each case differed somewhat, in background and major, from those who did not enroll in the lab course. Those differences were taken into account by normalizing performance on the lab-related questions with scores on the exam questions that did not involve material covered in the lab. When normalized in this way, the average score on lab-related questions of the students who took the lab, in both courses, was within 1% of the score of students who did not, with an uncertainty of 2%. This result raises questions about the effectiveness of labs at supporting mastery of physics content.
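The normalization described above can be sketched as follows. This is a minimal illustration, not the authors' analysis code; all scores, sample sizes, and group labels are hypothetical, and the procedure simply divides each group's mean on lab-related questions by its mean on non-lab-related questions before comparing groups.

```python
import numpy as np

# Hypothetical exam scores (percent). Each student has an average
# score on lab-related exam questions and on questions that did not
# involve material covered in the lab.
rng = np.random.default_rng(0)
lab_takers_related = rng.normal(72, 10, 300)
lab_takers_unrelated = rng.normal(70, 10, 300)
non_takers_related = rng.normal(68, 10, 600)
non_takers_unrelated = rng.normal(66, 10, 600)


def normalized_score(related, unrelated):
    """Normalize lab-related performance by performance on non-lab
    questions, to control for background differences between the
    two populations."""
    return related.mean() / unrelated.mean()


# A value near zero suggests the lab added no measurable value on
# lab-related exam questions once backgrounds are accounted for.
diff = normalized_score(lab_takers_related, lab_takers_unrelated) \
    - normalized_score(non_takers_related, non_takers_unrelated)
print(f"normalized difference: {diff:+.3f}")
```

The design choice here is to use within-group ratios rather than raw score differences, so that an overall stronger (or weaker) population does not masquerade as a lab effect.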
Research reveals that labs are more effective when their goal is to teach experimental practices rather than to reinforce classroom instruction.
Instructional labs are widely seen as a unique, albeit expensive, way to teach scientific content. We measured the effectiveness of introductory lab courses at achieving this educational goal across nine different lab courses at three very different institutions. These institutions and courses encompassed a broad range of student populations and instructional styles. The nine courses studied had two key things in common: the labs aimed to reinforce the content presented in lectures, and the labs were optional. By comparing the performance of students who did and did not take the labs (with careful normalization for selection effects), we found universally and precisely no added value to learning course content from taking the labs as measured by course exam performance. This work should motivate institutions and departments to reexamine the goals and conduct of their lab courses, given their resource-intensive nature. We show why these results make sense when looking at the comparative mental processes of students involved in research and instructional labs, and offer alternative goals and instructional approaches that would make lab courses more educationally valuable.
[This paper is part of the Focused Collection on Gender in Physics.] It is established that male students outperform female students on almost all commonly used physics concept inventories. However, there is significant variation in the factors that contribute to the gap, as well as the direction in which they influence it. It is presently unknown whether such a gender gap exists on the relatively new Concise Data Processing Assessment (CDPA) and, therefore, whether gendered actions in the teaching lab might influence, or be influenced by, the gender gap. To begin to estimate the gap, its predictors, and its correlates, we have measured performance on the CDPA at the pretest and posttest level. We have also made observations of how students in mixed-gender partnerships divide their time in the lab. We find a gender gap on the CDPA that persists from pretest to posttest and that is as big as, if not bigger than, similar reported gaps. We also observe compelling differences in how students divide their time in the lab. In mixed-gender pairs, male students tend to monopolize the computer, female and male students tend to share the equipment equally, and female students tend to spend more time on other activities, such as writing or speaking to peers, that involve neither the equipment nor the computer. We also find no correlation between computer use, when students are presumably working with their data, and performance on the CDPA posttest. In parallel to our analysis, we scrutinize some of the more commonly used approaches to similar data. We argue in favor of more explicitly checking the assumptions associated with the statistical methods that are used, and of improved reporting and contextualization of effect sizes. Ultimately, we claim no evidence that female students are less capable of learning than their male peers, and we suggest caution when using gain measures to draw conclusions about differences in science classroom performance across gender.
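The kind of assumption checking and effect-size reporting argued for above can be sketched as follows. This is not the paper's analysis; the scores and group sizes are hypothetical, and the sketch simply pairs a normality check with a pooled-variance Cohen's d so the effect size is reported alongside a test of the assumptions behind it.

```python
import numpy as np
from scipy import stats

# Hypothetical CDPA posttest scores (out of 10) for two groups.
rng = np.random.default_rng(1)
scores_a = rng.normal(6.5, 2.0, 120).clip(0, 10)
scores_b = rng.normal(5.5, 2.0, 110).clip(0, 10)


def cohens_d(a, b):
    """Cohen's d using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) \
        / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)


# Check the normality assumption before leaning on parametric tests.
for label, s in [("group A", scores_a), ("group B", scores_b)]:
    _, p = stats.shapiro(s)
    print(f"{label}: Shapiro-Wilk p = {p:.3f}")

d = cohens_d(scores_a, scores_b)
print(f"Cohen's d = {d:.2f}")
```

Reporting the effect size with its context (score scale, group sizes, assumption checks) rather than a bare p-value is the practice the abstract advocates.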
Introductory physics lab instruction is undergoing a transformation, with increasing emphasis on developing experimentation and critical thinking skills. These changes present a need for standardized assessment instruments to determine the degree to which students develop these skills through instructional labs. In this article, we present the development and validation of the Physics Lab Inventory of Critical Thinking (PLIC). We define critical thinking as the ability to use data and evidence to decide what to trust and what to do. The PLIC is a 10-question, closed-response assessment that probes student critical thinking skills in the context of physics experimentation. Using interviews and data from 5584 students at 29 institutions, we demonstrate, through qualitative and quantitative means, the validity and reliability of the instrument at measuring student critical thinking skills. This establishes a valuable new assessment instrument for instructional labs.