The physics instruction at UC Davis for life science majors takes place in a long-standing reformed large-enrollment physics course in which the discussion or laboratory instructors (primarily graduate student teaching assistants) implement the interactive-engagement (IE) elements of the course. Because so many different instructors participate in delivering the IE course elements, we find it essential to the instructors' professional development to observe and document the student-instructor interactions within the classroom. Out of this effort, we have developed a computerized real-time instructor observation tool (RIOT) to collect data on student-instructor interactions. We used the RIOT to observe 29 different instructors for 5 hours each over the course of one quarter, for a total of about 150 hours of class time, finding that the range of instructor behaviors is more extreme than previously assumed. In this paper, we introduce the RIOT and describe how the variation across 29 different instructors can provide students in the same course with significantly different course experiences.
This paper describes our large reformed introductory physics course at UC Davis, Collaborative Learning through Active Sense-making in Physics (CLASP), which bioscience students have been taking since 1996. The central feature of this course is a focus on sense-making by the students during the five hours per week of discussion/labs, in which students take part in activities emphasizing peer-peer discussion, argumentation, and presentation of ideas. The course differs in many fundamental ways from traditionally taught introductory physics courses. After discussing the unique features of CLASP and its implementation at UC Davis, we present various student outcome measures showing increased performance by students who took the CLASP course compared to students who took a traditionally taught introductory physics course. The measures we use include upper-division GPAs, MCAT scores, FCI gains, and MPEX-II scores.
In deciding on a student's grade in a class, an instructor generally needs to combine many individual grading judgments into one overall judgment. Two relatively common numerical scales used to specify individual grades are the 4-point scale (where each whole number 0-4 corresponds to a letter grade) and the percent scale (where letter grades A through D are uniformly distributed in the top 40% of the scale). This paper uses grading data from a single series of courses offered over a period of 10 years to show that the grade distributions emerging from these two grade scales differed in many ways from each other. Evidence suggests that the differences are due more to the grade scale than to either the students or the instructors. One major difference is that the fraction of students given grades less than C− was over 5 times larger when instructors used the percent scale. The fact that each instructor who used both grade scales gave more than 4 times as many of these low grades under percent scale grading suggests that the effect is due to the grade scale rather than the instructor. When the percent scale was first introduced in these courses in 2006, one of the authors of this paper, who is also one of the instructors in this dataset, had confidently predicted that any changes in course grading would be negligible. They were not negligible, even for this instructor.
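The two grade scales described above can be sketched in code to make the contrast concrete. This is an illustrative sketch only: the specific cutoffs (90/80/70/60 on the percent scale, rounding to the nearest whole number on the 4-point scale) are common conventions assumed for illustration, not the rubrics used by the instructors in the study.

```python
# Illustrative sketch of the two grade scales described in the abstract.
# Cutoffs are assumed for illustration, not taken from the study's rubrics.

def percent_to_letter(score):
    """Percent scale: letters A-D occupy the top 40% (60-100); below 60 is F."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    return "F"

def four_point_to_letter(points):
    """4-point scale: each whole number 0-4 corresponds to a letter grade."""
    return {4: "A", 3: "B", 2: "C", 1: "D", 0: "F"}[round(points)]

# On the percent scale, 55% is a failing grade; comparable work scored
# as 1.2 on the 4-point scale rounds to a D.
print(percent_to_letter(55))      # F
print(four_point_to_letter(1.2))  # D
```

The asymmetry is visible in the mapping itself: the percent scale reserves 60% of its range for grades below D, while the 4-point scale reserves only one of its five whole-number values, which is consistent with the finding that far more sub-C− grades appear under percent-scale grading.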
When assessing student work, graders often find that some students leave one or more problems blank on assessments. Since no work is shown, the grader has no means to evaluate the student's understanding of the problem and thus awards zero points. This practice penalizes the behavior of leaving a problem blank, but the zero is not necessarily an accurate assessment of the student's understanding of the topic. While some might argue that this grading practice is "fair" in that students know they cannot receive points for answers they do not submit, we present evidence that different student groups leave problems blank at different rates and are therefore unequally impacted. We analyze 10 years of UC Davis introductory physics course databases to show that different groups of students skip problems, and entire exams, at different rates. We also share some implications for grading practices.
This Resource Letter provides a guide to research-based assessment instruments (RBAIs) for physics and astronomy classes, but goes beyond physics and astronomy content topics to include attitudes and beliefs about physics, epistemologies and expectations, the nature of physics, problem solving, self-efficacy, reasoning skills, and lab skills. We also discuss RBAIs in cognate fields such as mathematics, as well as observation protocols for the standardized observation of teaching. In this Resource Letter, we present an overview of these assessments and surveys, including research validation, instructional level, format, and themes, to help faculty find the assessment that most closely matches their goals. This Resource Letter is a companion to RBAI-1: Research-based Assessment Instruments in Physics and Astronomy, which dealt explicitly with physics and astronomy topics. More details about each RBAI discussed in this paper are available at PhysPort: physport.org/assessments.