Many factors have been cited for the poor performance of students in CS1. To investigate how assessment mechanisms may impact student performance, nine experienced CS1 instructors reviewed final examinations from a variety of North American institutions. The majority of the exams reviewed were composed predominantly of high-value, integrative code-writing questions, and the reviewers regularly underestimated the number of CS1 concepts required to answer these questions. An evaluation of the content and cognitive requirements of individual questions suggests that in order to succeed, students must internalize a large amount of CS1 content. This emphasizes the need for focused assessment techniques to provide students with the opportunity to demonstrate their knowledge.
This case study explores an inverted classroom offering of an introductory programming course (CS1). Students prepared for lecture by watching short lecture videos and completing required in-video quiz questions. During lecture, the students worked through exercises with the support of the instructor and teaching assistants. We describe the course implementation and its assessment, including pre- and post-course surveys. We also discuss lessons learned, modifications that we plan to make for the next offering, and recommendations for others teaching inverted courses.
We compare a traditional CS1 offering with an inverted offering delivered the following year to a comparable student population. We measure student attitudes, grades, and final course outcomes and find that, while students in the inverted offering do not report increased enjoyment and are no more likely to pass, learning as measured by final exam performance increases significantly. This increase is not simply a function of a more onerous inverted offering, as students report spending similar time per week in the traditional and inverted offerings. Contrary to our hypotheses, however, we find no evidence that the inverted offering disproportionately helps beginners or those not fully fluent in English.
Problem Technology can transform health care; future physicians need to keep pace to ensure optimal patient care. Because future doctors are poorly prepared in computer literacy, the authors designed a computer programming certificate course. This Innovation Report describes the course and findings from a qualitative study to understand the ways it prepares medical students to use computing science and technology in medicine. Approach The 14-month Computing for Medicine certificate course (C4M), offered at the University of Toronto beginning in February 2016, comprises hands-on workshops that introduce programming accompanied by homework exercises, seminars by computer science experts on the application of programming to medicine, and coding projects. Using purposive and maximal variation sampling, 17 students who completed the course were interviewed from April to May 2017. Thematic analysis was performed using an iterative constant comparison approach. Outcomes Participants praised the C4M as an opportunity to achieve computer literacy—including language, syntax, and fundamental computational ideas (and their application to medicine)—and to acquire or strengthen algorithmic and logical thinking skills for approaching problems. They highlighted that the course illustrated linkages between computer science and medicine. Participants acknowledged a sometimes-existent chasm between producers and users of technology in medicine, recommending two-way communication between the disciplines when developing technology for use in medicine. Next Steps We recommend that medical schools consider computer literacy an essential skill to foster future collaborative computing partnerships for improved technology use by physicians and optimal patient care. We encourage further evaluation of future iterations of the C4M and similar courses.
In this paper, we explore the use of sequences of small code writing questions ("concept questions") designed to incrementally evaluate single programming concepts. We report on a study of student performance on a CS1 final examination that included a traditional code-writing question and four intentionally corresponding concept questions. We find that the concept questions are significant predictors of performance on both the corresponding code-writing question and the final exam as a whole. We argue that concept questions provide more accurate formative feedback and simplify marking by reducing the number of variants that must be considered. An analysis of responses categorized by the students' previous programming experience suggests that inexperienced students have the most to gain from the use of concept questions.