Psychophysiological methods, such as electroencephalography (EEG), provide reliable high-resolution measurements of affective player experience. In this article, the authors present a psychophysiological pilot study and its initial results to solidify a research approach they call affective ludology, a research area concerned with the physiological measurement of affective responses to player-game interaction. The study investigates the impact of level design on brainwave activity measured with EEG and on player experience measured with questionnaires. The goal of the study was to investigate cognition, emotion, and player behavior from a psychological perspective. For this purpose, a methodology for assessing gameplay experience with subjective and objective measures was developed, extending prior work on physiological measurements of affect in digital gameplay. The authors report the results of this pilot study: the impact of three different level design conditions (boredom, immersion, and flow) on EEG and on subjective indicators of gameplay experience. Results from the subjective gameplay experience questionnaire support the validity of the level design hypotheses. Patterns of EEG spectral power show that the immersion level design elicits more activity in the theta band, which may support a relationship between virtual spatial navigation or exploration and theta activity. The research shows that facets of gameplay experience can be assessed with affective ludology measures, such as EEG, in which cognitive and affective patterns emerge from different level designs.
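To illustrate the kind of EEG feature behind the theta-band result, the sketch below estimates average spectral power in a frequency band from a raw signal via a simple periodogram. The sampling rate, band edges, and synthetic signal are assumptions for demonstration only, not the authors' actual analysis pipeline.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Average spectral power of `signal` within [f_lo, f_hi] Hz,
    estimated from a simple periodogram (illustrative sketch)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

# Synthetic example: a 6 Hz (theta) oscillation plus noise
fs = 256  # Hz, a common EEG sampling rate (assumed)
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1.0 / fs)
eeg = np.sin(2 * np.pi * 6 * t) + 0.1 * rng.standard_normal(len(t))

theta = band_power(eeg, fs, 4, 8)   # theta band (4-8 Hz)
alpha = band_power(eeg, fs, 8, 13)  # alpha band (8-13 Hz)
assert theta > alpha  # the synthetic signal is theta-dominated
```

In a real study, such band powers would be computed per electrode and per experimental condition (here: boredom, immersion, flow) and then compared statistically.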
Gaze visualizations represent an effective way of gaining fast insights into eye tracking data. Current approaches do not adequately support eye tracking studies in three-dimensional (3D) virtual environments. Hence, we propose a set of advanced gaze visualization techniques for supporting gaze behavior analysis in such environments. Similar to commonly used gaze visualizations for two-dimensional stimuli (e.g., images and websites), we contribute advanced 3D scan paths and 3D attentional maps. In addition, we introduce a models-of-interest timeline depicting viewed models, which can be used for displaying scan paths in a selected time segment. We also discuss a prototype toolkit that implements our proposed techniques. Their potential for facilitating eye tracking studies in virtual environments was supported by a user study among eye tracking and visualization experts.
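One simple way to realize a 3D attentional map of the kind described is to accumulate, per mesh vertex, a Gaussian splat around each 3D gaze hit point; the resulting per-vertex heat can then be rendered as a color overlay. The vertex representation, the splatting scheme, and `sigma` below are illustrative assumptions, not necessarily the paper's method.

```python
import numpy as np

def attention_map(vertices, hit_points, sigma=0.1):
    """Accumulate an attention value per mesh vertex by splatting
    a Gaussian around each 3D gaze hit point (hedged sketch).

    vertices:   (N, 3) array of mesh vertex positions
    hit_points: iterable of (3,) gaze-ray intersection points
    """
    heat = np.zeros(len(vertices))
    for hit in hit_points:
        d2 = np.sum((vertices - hit) ** 2, axis=1)  # squared distances
        heat += np.exp(-d2 / (2 * sigma ** 2))      # Gaussian falloff
    return heat

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
heat = attention_map(verts, [np.array([0.0, 0.0, 0.0])])
assert heat[0] > heat[1]  # the fixated vertex receives more attention
```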
Remote pan-and-zoom control for the exploration of large information spaces is of interest for various application areas, such as browsing through medical data in sterile environments or investigating geographic information systems on a distant display. In this context, considering a user's visual attention for pan-and-zoom operations could be of interest. In this paper, we investigate the potential of gaze-supported panning in combination with different zooming modalities: (1) a mouse scroll wheel, (2) tilting a handheld device, and (3) touch gestures on a smartphone. Thereby, it is possible to zoom in at the location a user is currently looking at (i.e., gaze-directed pivot zoom). Ten participants tested these techniques with Google Earth in a user study. While participants were fastest with the already familiar mouse-only baseline condition, the user feedback indicates particularly high potential for gaze-supported pivot zooming in combination with a scroll wheel or touch gestures.
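The core of a gaze-directed pivot zoom is a transform that changes the zoom level while keeping the world location under the gaze point fixed on screen. A minimal sketch, assuming a 2D view described by a pan offset and a scalar zoom (names and coordinate conventions are illustrative, not the paper's implementation):

```python
def pivot_zoom(offset_x, offset_y, zoom, gaze_x, gaze_y, factor):
    """Zoom the view by `factor` while keeping the screen point the
    user is looking at (gaze_x, gaze_y) anchored to the same world
    location. Returns the new (offset_x, offset_y, zoom)."""
    # World coordinate currently under the gaze point
    wx = (gaze_x - offset_x) / zoom
    wy = (gaze_y - offset_y) / zoom
    new_zoom = zoom * factor
    # Re-anchor the offset so that world point maps back to the gaze point
    return gaze_x - wx * new_zoom, gaze_y - wy * new_zoom, new_zoom

ox, oy, z = pivot_zoom(0.0, 0.0, 1.0, 100.0, 50.0, 2.0)
# The world point under the gaze is unchanged after zooming:
assert abs((100.0 - ox) / z - 100.0) < 1e-9
```

Any of the three zooming modalities (scroll wheel, tilt, touch) can then simply drive `factor`, while gaze supplies the pivot.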
Envisioning, designing, and implementing the user interface require a comprehensive understanding of interaction technologies. In this forum we scout trends and discuss new technologies with the potential to influence interaction design. --- Albrecht Schmidt, Editor
Since eye gaze may serve as an efficient and natural input for steering in virtual 3D scenes, we investigate the design of eye gaze steering user interfaces (UIs) in this paper. We discuss design considerations and propose design alternatives based on two selected steering approaches differing in input condition (discrete vs. continuous) and velocity selection (constant vs. gradient-based). The proposed UIs have been iteratively advanced based on two user studies with twelve participants each. In particular, the combination of continuous and gradient-based input shows a high potential, because it allows for gradually changing the moving speed and direction depending on a user's point-of-regard. This has the advantage of reducing overshooting problems and dwell-time activations. We also investigate discrete constant input for which virtual buttons are toggled using gaze dwelling. As an alternative, we propose the Sticky Gaze Pointer as a more flexible way of discrete input.
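The gradient-based velocity selection can be sketched as a mapping from the point-of-regard's offset from the screen center to a steering speed: zero inside a central dead zone (which also avoids unintended dwell-time activations), then growing toward the screen edge. The dead zone, the linear ramp, and all parameter names below are assumptions for illustration, not the studied UIs.

```python
def steering_velocity(por_x, center_x, dead_zone, max_speed, width):
    """Map the horizontal point-of-regard `por_x` to a signed
    steering velocity: 0 inside the dead zone, then ramping
    linearly up to `max_speed` at the screen edge (sketch)."""
    dx = por_x - center_x
    if abs(dx) <= dead_zone:
        return 0.0  # looking near the center: no movement
    span = width / 2 - dead_zone  # distance from dead zone to edge
    speed = max_speed * (abs(dx) - dead_zone) / span
    return speed if dx > 0 else -speed

assert steering_velocity(400, 400, 50, 10.0, 800) == 0.0  # dead zone
```

Because speed grows gradually with gaze eccentricity, abrupt starts and the overshooting typical of constant-velocity steering are reduced.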
When working with zoomable information spaces, complex tasks can be divided into primary and secondary tasks (e.g., pan and zoom). In this context, a multimodal combination of gaze and foot input is highly promising for complementing manual interactions, for example, with mouse and keyboard. Motivated by this, we present several alternatives for multimodal gaze-supported foot interaction in a desktop computer setup for pan and zoom. While eye gaze is ideal for indicating a user's current point of interest and where to zoom in, foot interaction is well suited for parallel input controls, for example, to specify the zooming speed. Our investigation focuses on varied foot input devices differing in their degrees of freedom (e.g., one- and two-directional foot pedals) that can be seamlessly combined with gaze input.
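One plausible way to let a foot pedal control zooming speed while gaze picks the zoom target is to map pedal deflection to a per-frame zoom factor, so pressing harder zooms faster and reversing the pedal zooms out. The exponential mapping and all names below are assumptions for illustration, not the paper's implementation.

```python
def pedal_zoom_factor(deflection, dt, max_rate=1.5):
    """Map a pedal deflection in [-1, 1] to a per-frame zoom
    factor. `max_rate` is the zoom ratio per second at full
    deflection; `dt` is the frame time in seconds (sketch)."""
    return max_rate ** (deflection * dt)

assert pedal_zoom_factor(0.0, 0.016) == 1.0  # pedal at rest: no zoom
```

Each frame, the returned factor would feed a gaze-anchored zoom (as in gaze-directed pivot zooming), keeping the looked-at point fixed while the pedal modulates speed.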
Digital games are increasingly profiting from sensing technologies. However, their focus is mostly on sensing limb movements. We propose that sensing capabilities could also be used to engage players with proxemics: the interpersonal distance between players. We further add that wireless networks offer complementary distance zones for designers, offering novel design resources for digital play. We use our own as well as other games to articulate a set of strategies on how designers can utilize both proxemics and the new wireless proxemics to facilitate novel play experiences. Ultimately, with our work, we aim to expand the range of digital play.