Many novice programmers view programming tools as all-knowing, infallible authorities on what is right and wrong about code. This misconception is particularly detrimental to beginners, who may view the cold, terse, and often judgmental errors from compilers as a sign of personal failure. It is possible, however, that attributing this failure to the computer, rather than to the learner, may improve learners' motivation to program. To test this hypothesis, we present Gidget, a game where the eponymous robot protagonist is cast as a fallible character that blames itself for not being able to correctly write code to complete its missions. Players learn programming by working with Gidget to debug its problematic code. In a two-condition controlled experiment, we manipulated Gidget's level of personification in its communication style, sound effects, and image. We tested our game with 116 self-described novice programmers recruited on Amazon's Mechanical Turk and found that, when given the option to quit at any time, those in the experimental condition (with a personable Gidget) completed significantly more levels in a similar amount of time. Participants in the control and experimental groups played the game for an average of 39.4 minutes (SD=34.3) and 50.1 minutes (SD=42.6), respectively. These findings suggest that how programming tool feedback is portrayed to learners can have a significant impact on motivation to program and learning success.
Figure 1. Gidget's level design mode (the Gidget character is circled). In this mode, learners design their own levels for others to solve. Players write code (left), see animated results (middle), and choose graphics for the level (right).

Abstract—Although there are many systems designed to engage people in programming, few explicitly teach the subject, expecting learners to acquire the necessary skills on their own as they create programs from scratch. We present a principled approach to teaching programming using a debugging game called Gidget, which was created using a unique set of seven design principles. A total of 44 teens played it via a lab study and two summer camps. Principle by principle, the results revealed strengths, problems, and open questions for the seven principles. Taken together, the results were very encouraging: learners were able to program with conditionals, loops, and other programming concepts after using the game for just 5 hours.
We introduce mixed physical and digital authoring environments for children, which invite them to create stories with enriched drawings that are programmed to control robotic characters. These characters respond to the children's drawings as well as to their touch. Children create their stories by drawing props and programming how the robotic character should respond to those props and to physical touch. By drawing, programming the robotic character's behaviors, and organizing and negotiating the order and meanings of the props, children's story events unfold in creative ways. We present our iterative design process of developing and evaluating our prototypes with children. We discuss the role technology can play in supporting children's everyday creative storytelling.
People are increasingly turning to online resources to learn to code. However, despite their prevalence, it is still unclear how successful these resources are at teaching CS1 programming concepts. Using a pre-test/post-test study design, we measured the performance of 60 novices before and after they used one of the following randomly assigned learning activities: 1) completing a Python course on a website called Codecademy, 2) playing through and finishing a debugging game called Gidget, or 3) using Gidget's puzzle designer to write programs from scratch. The pre- and post-test exams consisted of 24 multiple-choice questions that were selected and validated based on data from 1,494 crowdsourced respondents. All 60 novices across the three conditions did poorly on the exams overall, in both the pre-tests and post-tests (e.g., the best median post-test score was 50% correct). However, those completing the Codecademy course and those playing through the Gidget game showed over a 100% increase in correct answers when comparing their post-test scores to their pre-test scores. Those playing Gidget, however, achieved these same learning gains in half the time. This was in contrast to novices who used the puzzle designer, who did not show any measurable learning gains. All participants performed similarly within their own conditions, regardless of gender, age, or education. These findings suggest that discretionary online educational technologies can successfully teach novices introductory programming concepts (to a degree) within a few hours when explicitly guided by a curriculum.
Many software requirements are identified only after a product is deployed, once users have had a chance to try the software and provide feedback. Unfortunately, addressing such feedback is not always straightforward, even when a team is fully invested in user-centered design. To investigate what constrains a team's evolution decisions, we performed a 6-month field study of a team applying iterative user-centered design methods to the design, deployment, and evolution of a web application for a university community. Across interviews with the team, analyses of their bug reports, and further interviews with both users and non-adopters of the application, we found that most of the constraints on addressing user feedback emerged from conflicts between users' heterogeneous use of information and inflexible assumptions in the team's software architecture derived from earlier user research. These findings highlight the need for new approaches to expressing and validating assumptions from user research as software evolves.