Cybersickness still poses a significant challenge to the widespread adoption of virtual reality, causing varying levels of discomfort and potentially breaking the immersive experience. Researchers have sought the fundamental causes of cybersickness for years, yet despite this longstanding interest, findings on the contributing factors and on solutions for combating cybersickness remain inconsistent. Moreover, little attention has been paid to individual susceptibility. A consolidated explanation is still under development and requires more empirical studies with robust, reproducible methodologies. This review presents an integrated survey connecting the findings of previous review papers with the state of the art in empirical studies involving participants. We then review the literature on practical studies of the contributing factors, the pros and cons of measurement methods, profiles of cybersickness, and solutions for mitigating the phenomenon. Our findings point to a lack of consideration of user susceptibility and gender balance in between-groups studies; in addition, incongruities among empirical findings raise concerns. We conclude with suggested directions for future empirical investigations.
The feeling of horror in films or games relies on the audience's perception of a tense atmosphere, often achieved through sound that accompanies the on-screen drama and guides their emotional experience throughout the scene or gameplay sequence. These progressions are typically crafted from a priori knowledge of how a scene or gameplay sequence will play out and of the emotional patterns a game director wants to convey. Designing appropriate sound becomes even more challenging once the scenery and general context are autonomously generated by an algorithm. Toward realizing sound-based affective interaction in games, this paper explores the creation of computational models capable of ranking short audio pieces based on crowdsourced annotations of tension, arousal, and valence. Affect models are trained via preference learning on over a thousand annotations using support vector machines, whose inputs are low-level features extracted from the audio assets of a comprehensive sound library. The models constructed in this work predict the tension, arousal, and valence elicited by sound with accuracies of approximately 65%, 66%, and 72%, respectively.
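The core idea of preference learning on pairwise annotations can be sketched as follows. This is a minimal illustration, not the paper's method: the paper trains support vector machines, whereas this sketch substitutes a simpler perceptron-style pairwise learner; the clip names and two-dimensional feature vectors are invented for illustration. The shared principle is the pairwise transformation, where each annotation "clip A elicits more tension than clip B" becomes a constraint that the learned weights score A above B.

```python
# Hedged sketch of preference learning for ranking audio clips by elicited
# tension. A pair (preferred, other) becomes the constraint
#   w . (x_pref - x_other) > 0,
# enforced here with perceptron-style updates (the paper uses SVMs instead).

import random

def train_preference_model(pairs, dim, epochs=100, lr=0.1, seed=0):
    """Learn weights w so that w . x_pref > w . x_other for each pair.

    pairs: list of (preferred_features, other_features) tuples.
    """
    rng = random.Random(seed)
    w = [0.0] * dim
    for _ in range(epochs):
        rng.shuffle(pairs)
        for pref, other in pairs:
            diff = [p - o for p, o in zip(pref, other)]
            score = sum(wi * di for wi, di in zip(w, diff))
            if score <= 0:  # constraint violated: nudge w toward diff
                w = [wi + lr * di for wi, di in zip(w, diff)]
    return w

def rank(w, items):
    """Sort (name, features) items from most to least preferred."""
    return sorted(items, key=lambda it: -sum(wi * xi for wi, xi in zip(w, it[1])))

# Toy data: two hypothetical low-level audio features per clip (say, loudness
# and roughness); annotators judged the first clip of each pair as more tense.
clips = {
    "drone":   [0.90, 0.80],
    "strings": [0.70, 0.60],
    "ambient": [0.20, 0.30],
    "silence": [0.05, 0.10],
}
pairs = [
    (clips["drone"], clips["ambient"]),
    (clips["strings"], clips["silence"]),
    (clips["drone"], clips["silence"]),
    (clips["strings"], clips["ambient"]),
]
w = train_preference_model(pairs, dim=2)
ordering = [name for name, _ in rank(w, clips.items())]
```

A real pipeline would replace the toy features with low-level descriptors extracted from the audio and the update rule with an SVM solved over the pairwise differences, but the ranking interface stays the same.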