Software developers make programming mistakes that cause serious bugs for their customers. Existing work on detecting problematic software focuses mainly on post hoc identification of correlations between bug fixes and code. We propose a new approach to this problem: detect when software developers are experiencing difficulty while they work on their programming tasks, and stop them before they can introduce bugs into the code. In this paper, we investigate a novel approach to classifying the difficulty of code-comprehension tasks using data from psycho-physiological sensors. We present the results of a study we conducted with 15 professional programmers to see how well an eye tracker, an electrodermal activity sensor, and an electroencephalography sensor could predict whether developers would find a task difficult. We can predict nominal task difficulty (easy/difficult) for a new developer with 64.99% precision and 64.58% recall, and for a new task with 84.38% precision and 69.79% recall. We can improve the Naive Bayes classifier's performance by training it on just the eye-tracking data over the entire dataset, or by using a sliding-window data collection scheme with a 55-second time window. Our work brings the community closer to a viable and reliable measure of task difficulty that could power the next generation of programming support tools.
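The pipeline the abstract describes (sliding-window features over sensor streams, a Naive Bayes classifier, and leave-one-developer-out evaluation for the "new developer" setting) can be sketched as follows. This is a minimal illustration on synthetic data, not the study's actual features or code: the 10 Hz sampling rate, the mean/std features, and the simulated pupil-diameter streams are all assumptions; only the 55-second window and the Naive Bayes model come from the abstract.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)

def sliding_window_features(signal, fs, window_s=55, step_s=5):
    """Summarize a 1-D sensor stream into per-window mean/std features."""
    win, step = int(window_s * fs), int(step_s * fs)
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        seg = signal[start:start + win]
        feats.append([seg.mean(), seg.std()])
    return np.array(feats)

# Synthetic pupil-diameter-like streams: "difficult" tasks shift the mean up.
fs = 10  # Hz, hypothetical sampling rate
X, y, groups = [], [], []
for dev in range(6):          # hypothetical developers
    for label in (0, 1):      # 0 = easy, 1 = difficult
        stream = rng.normal(3.0 + 0.4 * label, 0.2, size=fs * 120)
        f = sliding_window_features(stream, fs)
        X.append(f)
        y.extend([label] * len(f))
        groups.extend([dev] * len(f))
X, y, groups = np.vstack(X), np.array(y), np.array(groups)

# Leave-one-developer-out evaluation, mirroring "predict for a new developer":
# every window from the held-out developer is classified by a model trained
# only on the other developers' windows.
preds = np.empty_like(y)
for tr, te in LeaveOneGroupOut().split(X, y, groups):
    preds[te] = GaussianNB().fit(X[tr], y[tr]).predict(X[te])

print("precision:", round(precision_score(y, preds), 2))
print("recall:", round(recall_score(y, preds), 2))
```

Grouped cross-validation is the key design choice here: splitting windows randomly would leak a developer's physiology into both train and test sets and inflate the scores.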
Observers can quickly search among shaded cubes for one lit from a unique direction. However, replace the cubes with similar 2-D patterns that do not appear to have a 3-D shape, and search difficulty increases. These results have challenged models of visual search and attention. We demonstrate that cube search displays differ from those with "equivalent" 2-D search items in terms of the informativeness of fairly low-level image statistics. This informativeness predicts peripheral discriminability of target-present from target-absent patches, which in turn predicts visual search performance, across a wide range of conditions. Comparing model performance on a number of classic search tasks, cube search does not appear unexpectedly easy. Easy cube search, per se, does not provide evidence for preattentive computation of 3-D scene properties. However, search asymmetries derived from rotating and/or flipping the cube search displays cannot be explained by the information in our current set of image statistics. This may merely suggest a need to modify the model's set of 2-D image statistics. Alternatively, it may be difficult cube search that provides evidence for preattentive computation of 3-D scene properties. By attributing 2-D luminance variations to a shaded 3-D shape, 3-D scene understanding may slow search for 2-D features of the target.
Sensory information must be processed selectively in order to represent the world and guide behavior. How does such selection occur? Here we consider two alternative classes of selection mechanisms: In blocking, unattended stimuli are blocked entirely from access to downstream processes, and in attenuation, unattended stimuli are reduced in strength but, if strong enough, can still access downstream processes. Existing evidence as to whether blocking or attenuation is a more accurate model of human performance is mixed. Capitalizing on a general distinction between blocking and attenuation (blocking cannot be overcome by strong stimuli, whereas attenuation can), we measured how attention interacted with the strength of stimuli in two spatial selection paradigms, spatial filtering and spatial monitoring. The evidence was consistent with blocking for the filtering paradigm and with attenuation for the monitoring paradigm. This approach provides a general measure of the fate of unattended stimuli.
The present study investigated in detail the circumstances and patterns of injury experienced by adults and children in retail store environments in the United States. Data from the CPSC's National Electronic Injury Surveillance System (NEISS) database were analyzed, yielding a total national estimate of 85,403 injuries occurring inside stores for the year 2012. Injuries were analyzed by severity, type of injury, accident mode, objects/conditions involved in the accident, and age of the injured. The majority of store-related injuries were not severe, and did not involve interaction with store-related objects or factors unique to store environments. The most common accident modes were general falls, unintentional contact with an object or person, falls from objects (such as carts) and slips. Children under the age of 5 had the highest rates of injury, most commonly resulting from falls from shopping carts. Adults over the age of 65 had the second highest rates of injury, typically resulting from general falls not caused by specific store-related objects.
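NEISS-style analyses like the one described above scale each sampled hospital case by a national weight and sum the weights within each category to produce national estimates. The sketch below illustrates that computation on a few invented records; the column names, weights, and case values are hypothetical and are not the study's data, and the age bins simply mirror the under-5 / over-65 comparison in the abstract.

```python
import pandas as pd

# Hypothetical NEISS-style records: each sampled case carries a national
# weight that scales it up to a national estimate.
cases = pd.DataFrame({
    "accident_mode": ["general fall", "struck by object", "fall from cart",
                      "slip", "general fall", "fall from cart"],
    "age": [70, 34, 3, 45, 68, 4],
    "weight": [15000.0, 12000.0, 9000.0, 8000.0, 14000.0, 9500.0],
})

# National estimate per accident mode = sum of case weights in that mode.
by_mode = (cases.groupby("accident_mode")["weight"]
                .sum()
                .sort_values(ascending=False))
print(by_mode)

# Age-group breakdown, mirroring the under-5 / over-65 comparison.
cases["age_group"] = pd.cut(cases["age"], bins=[0, 5, 65, 120],
                            labels=["under 5", "5-65", "over 65"])
by_age = cases.groupby("age_group", observed=False)["weight"].sum()
print(by_age)
```

Summing weights rather than counting rows is the essential step: a raw row count would describe only the sampled hospitals, not the national injury burden.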