In this paper, we report a study examining the relationship between image-based computational analyses of web pages and users' aesthetic judgments of the same image material. Web pages were iteratively decomposed into quadrants of minimum entropy (quadtree decomposition) based on low-level image statistics, permitting a characterization of each page in terms of its organizational symmetry, balance, and equilibrium. These attributes were then evaluated for their correlation with human participants' subjective ratings of the same web pages on four aesthetic and affective dimensions. Several of these correlations were quite large and revealed interesting patterns in the relationship between low-level (i.e., pixel-level) image statistics and design-relevant dimensions.
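The quadtree decomposition described above can be illustrated with a minimal sketch: a grayscale image is recursively split into four quadrants until each leaf falls below an entropy threshold or a minimum size. The `threshold` and `min_size` values here are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def entropy(block):
    """Shannon entropy (bits) of an 8-bit grayscale block."""
    counts = np.bincount(block.ravel(), minlength=256)
    p = counts[counts > 0] / block.size
    return float(-(p * np.log2(p)).sum())

def quadtree(img, threshold=4.0, min_size=8, origin=(0, 0)):
    """Recursively split `img` into quadrants until each leaf's
    entropy drops below `threshold` or its side reaches `min_size`.
    Returns a list of (y, x, height, width) leaf rectangles."""
    h, w = img.shape
    y0, x0 = origin
    if entropy(img) < threshold or min(h, w) <= min_size:
        return [(y0, x0, h, w)]
    hy, hx = h // 2, w // 2
    quads = [(img[:hy, :hx], (y0, x0)),
             (img[:hy, hx:], (y0, x0 + hx)),
             (img[hy:, :hx], (y0 + hy, x0)),
             (img[hy:, hx:], (y0 + hy, x0 + hx))]
    leaves = []
    for sub, org in quads:
        leaves += quadtree(sub, threshold, min_size, org)
    return leaves
```

A uniform region yields a single leaf, while visually busy regions split into many small leaves; the resulting leaf layout is the kind of structure from which symmetry, balance, and equilibrium measures can be derived.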
Computational aesthetics has become an active research field in recent years, but there have been few attempts at computational aesthetic evaluation of logos. In this article, we restrict our study to black-and-white logos professionally designed for name-brand companies with similar properties, and apply perceptual models of standard design principles to the computational aesthetic evaluation of logos. We define a group of metrics to evaluate aspects of design principles such as balance, contrast, and harmony. We also collect human ratings of balance, contrast, harmony, and aesthetics of 60 logos from 60 volunteers. Statistical linear regression models are trained on this database using a supervised machine-learning method. Experimental results show that our model-evaluated balance, contrast, and harmony have highly significant correlations of over 0.87 with human evaluations on the same dimensions. Finally, we regress human-evaluated aesthetics scores on model-evaluated balance, contrast, and harmony. The resulting regression model of aesthetics can predict human judgments of perceived aesthetics with a high correlation of 0.85. Our work provides a machine-learning-based reference framework for quantitative aesthetic evaluation of graphic design patterns, and for research exploring the relationship between human aesthetic perception and computational evaluation of design principles extracted from graphic designs.
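The final regression step can be sketched as ordinary least squares of aesthetics scores on the three metric dimensions. The data below are synthetic stand-ins (not the study's ratings), and the weights are arbitrary, chosen only to demonstrate the modeling setup.

```python
import numpy as np

# Synthetic stand-in data: 60 "logos" with hypothetical model-evaluated
# balance, contrast, and harmony scores, plus noisy aesthetics ratings.
rng = np.random.default_rng(42)
n = 60
X = rng.uniform(0.0, 1.0, size=(n, 3))           # balance, contrast, harmony
true_w = np.array([0.5, 0.3, 0.2])               # arbitrary illustrative weights
y = X @ true_w + rng.normal(0.0, 0.05, size=n)   # simulated human aesthetics

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

# Pearson correlation between predicted and observed aesthetics scores,
# the quantity the abstract reports as 0.85 for its real data.
r = np.corrcoef(pred, y)[0, 1]
```

With low simulated noise the fitted model recovers a high predicted-versus-observed correlation, mirroring the structure of the reported evaluation.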
Past research in a number of fields confirms the existence of a link between cognition and eye movement control, beyond simply a pointing relationship. This being the case, it should be possible to use eye movement recording as a basis for detecting users' cognitive states in real time. Several examples of such cognitive state detectors have been reported in the literature.
A multi-disciplinary project is described in which the goal is to provide the computer with as much real-time information about the human state (cognitive, affective, and motivational) as possible, and to base computer actions on this information. The application area in which this is being implemented is science education: learning about gears through exploration. Two studies are reported in which participants solve simple problems of pictured gear trains while their eye movements are recorded. The first study indicates that most eye movement sequences are compatible with predictions of a simple sequential cognitive model, and it is suggested that those sequences that do not fit the model may be of particular interest in the HCI context as indicating problems or alternative mental strategies. The mental rotation of gears sometimes produces sequences of short eye movements in the direction of motion; thus, such sequences may be useful as cognitive state detectors.

The second study tested the hypothesis that participants are thinking about the object to which their eyes are directed. In this study, the display was turned off partway through the process of solving a problem, and the participants reported what they were thinking about at that time. While in most cases the participants reported cognitive activities involving the fixated object, this was not the case on a sizeable number of trials.
The change detection paradigm was used in a single-monitor driving simulator to study drivers' awareness of other vehicles on the roadway. While the participant drove, a moving or parked vehicle ahead (30 or 60 m) would occasionally change location (30% or 60% nearer or farther away), color, or identity during a 150 ms blank-out period. The results showed that only for moving vehicles with a very large location displacement (60%) could participants detect changes as well as they detected color or identity changes. We also examined how the driving task itself influences the formation of this representation; overall, detection performance was better in the non-driving condition. We argue that vehicle location is coarsely represented in drivers' memory, and that this coarse representation, together with vehicle features, is used to visually monitor more fine-grained location information. This helps explain why drivers often fail to notice a decreasing distance to the car ahead, resulting in rear-end collisions.