The findings have implications for the development of technologies, applications, and user interfaces for flexible displays and the design of visual display devices.
Digital images taken by mobile phones are the most frequent class of images created today. Due to their omnipresence and the many ways they are encountered, they require a specific focus in research. However, to date, there is no systematic compilation of the various factors that may determine our evaluations of such images, and thus no explanation of how users select and identify relatively “better” or “worse” photos. Here, we propose a theoretical taxonomy of factors influencing the aesthetic appeal of mobile phone photographs. Beyond addressing relatively basic/universal image characteristics, perhaps more related to fast (bottom-up) perceptual processing of an image, we also consider factors involved in the slower (top-down) re-appraisal or deepened aesthetic appreciation of an image. We span this taxonomy across specific types of picture genres commonly taken—portraits of other people, selfies, scenes, and food. We also discuss the variety of goals, uses, and contextual aspects of users of mobile phone photography. As a working hypothesis, we propose that two main decisions are often made with mobile phone photographs: (1) Users assess images at a first glance—by swiping through a stack of images—focusing on visual aspects that might be decisive to classify them from “low quality” (too dark, out of focus) to “acceptable” to, in rare cases, “an exceptionally beautiful picture.” (2) Users make more deliberate decisions regarding a “favorite” picture or the desire to preserve or share a picture with others, which are presumably tied to aspects such as content and framing, but also culture or personality—factors that have largely been overlooked in empirical research on the perception of photographs. In sum, the present review provides an overview of current focal areas and gaps in research and offers a working foundation for upcoming research on the perception of mobile phone photographs as well as future developments in the fields of image recording and sharing technology.
Most spatially interlacing stereoscopic 3D displays present the odd rows of an image to one eye of the viewer and the even rows to the other. The visual system then fuses the interlaced images into a single percept. This row-based interlacing creates a small vertical disparity between the images; however, interlacing may also induce horizontal disparities, thus generating depth artifacts. Whether people perceive these depth artifacts, and if so at what magnitude, is unknown. In this study, we hypothesized and tested whether people perceive interlaced edges at different depth levels. We tested oblique edge orientations ranging from 2° to 32° and pixel sizes ranging from 16 to 79 arcsec of visual angle in a depth probe experiment. Five participants viewed the visual stimuli through a stereoscope under three viewing conditions: non-interlaced, interlaced, and row-averaged (i.e., where even and odd rows are averaged). Our results indicated that people perceive depth artifacts when viewing interlaced stereoscopic images and that these depth artifacts increase with pixel size and decrease with edge orientation angle. A pixel size of 32 arcsec of visual angle still evoked depth percepts, whereas 16 arcsec did not. Row-averaging the images effectively eliminated these depth artifacts. These findings have implications for display design, content production, image quality studies, and stereoscopic games and software.
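The two operations described in the abstract can be sketched in a few lines. The sketch below is illustrative only: the function names are hypothetical, the geometric relation d = p / tan(θ) (the horizontal shift of an oblique edge produced by a one-row vertical offset) is our assumption for why the artifact grows with pixel size p and shrinks with edge angle θ, and the row-averaging function implements one plausible reading of "even and odd rows are averaged" (each row pair replaced by two copies of its mean, so both eyes receive identical rows); the study's actual stimulus generation may differ.

```python
import math

def induced_horizontal_disparity(pixel_size_arcsec, edge_angle_deg):
    """Illustrative geometry (our assumption, not stated in the abstract):
    shifting an oblique edge vertically by one pixel row (size p) moves it
    horizontally by d = p / tan(theta), where theta is the edge's angle
    from horizontal. Larger pixels or shallower edges -> larger disparity."""
    return pixel_size_arcsec / math.tan(math.radians(edge_angle_deg))

def row_average(image):
    """One reading of 'row-averaged': replace each even/odd row pair with
    two copies of their mean, so both eyes see identical rows and the
    interlacing-induced disparity vanishes. `image` is a list of rows."""
    out = []
    for r in range(0, len(image) - 1, 2):
        mean_row = [(a + b) / 2.0 for a, b in zip(image[r], image[r + 1])]
        out.append(list(mean_row))
        out.append(list(mean_row))
    if len(image) % 2:  # keep a trailing unpaired row unchanged
        out.append(list(image[-1]))
    return out
```

Under this assumed geometry, a 32-arcsec pixel and a shallow 2° edge give a far larger induced disparity than the same pixel at 32°, consistent with the reported trend that artifacts decrease with edge orientation angle.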