Code reading is an important skill in programming. Inspired by the linearity that people exhibit while reading natural language text, we designed local and global gaze-based measures to characterize linearity (left-to-right and top-to-bottom) in reading source code. Unlike natural language text, source code is executable and requires a specific reading approach. To validate these measures, we compared the eye movements of novice and expert programmers who were asked to read and comprehend short snippets of natural language text and Java programs. Our results show that novices read source code less linearly than natural language text. Moreover, experts read code less linearly than novices. These findings indicate that there are specific differences between reading natural language and source code, and suggest that non-linear reading skills increase with expertise. We discuss the implications for practitioners and educators.
Graphic data models are commonly used as a tool for presentation of information structures in the design, implementation, use and maintenance of the databases that support information systems. The methods proposed for database design assume that the use of graphic data models will enhance understanding of system specifications by both the end-users and the implementers of the system. For this assumption to hold, the information presented in the graphic data model must be readily comprehensible so that the design, represented by the model, can be confirmed and implemented correctly. The lack of standard representations for graphic models has led to a variety of graphic styles. To date, there has been little focus on studying the effect graphic style has on model comprehension. We have studied the effect of three graphic styles proposed for data models on model legibility and interpretation. Our study shows a significant variation in model interpretation that can be attributed to the graphic syntax used. Graphic style appears to influence which model elements are included in the interpretation, as well as the way data models are read.
This research investigates university students' determinations of the credibility of information on Web sites, their confidence in those determinations, and their perceptions of Web site authors' vested interests. In Study 1, university-level computer science and education students selected Web sites they determined to be credible and Web sites that exemplified misrepresentations. Categorization of Web site credibility determinations indicated that the most frequently provided reasons associated with high credibility included information focus or relevance, educational focus, and name recognition. Reasons for knowing a Web site's content is wrong included lack of corroboration with other information, information focus, and bias. Vested interests associated with commercial Web sites were regarded with distrust, whereas vested interests of educational Web sites were not. In Study 2, credibility determinations of university students enrolled in computer science courses were examined for 3 provided Web sites dealing with the same computer science topic. Reasons for determining Web site inaccuracy included own expertise, information corroboration, information design, and bias. As in Study 1, commercial vested interests were negatively regarded in contrast to educational interests. Instructional implications and suggestions for further research are discussed.