This paper presents a study in which users define intuitive gestures for navigating a humanoid robot. For eleven navigational commands, 385 gestures performed by 35 participants were analyzed. The results of the study reveal user-defined gesture sets for both novice and expert users. In addition, we present a taxonomy of the user-defined gesture sets, agreement scores for the gesture sets, and time performances of the gesture motions, and we discuss implications for the design of robot control, with a focus on recognition and user interfaces.
This paper presents a framework that allows users to interact with and navigate a humanoid robot using body gestures. The first part of the paper describes a study to define intuitive gestures for eleven navigational commands, based on an analysis of 385 gestures performed by 35 participants. From the study results, we present a taxonomy of the user-defined gesture sets, agreement scores for the gesture sets, and time performances of the gesture motions. The second part of the paper presents a full-body interaction system for recognizing the user-defined gestures. We evaluate the system with 22 recruited participants to test the accuracy of the proposed system. The results show that most of the defined gestures can be successfully recognized, with a precision between 86% and 100% and an accuracy between 73% and 96%. We discuss the limitations of the system and outline future improvements.
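Both abstracts above report agreement scores for the elicited gesture sets. As a point of reference, gesture-elicitation studies of this kind commonly compute agreement in the style of Wobbrock et al.; the exact formula is not stated in the abstracts, so the following is an assumed sketch of that common definition:

```latex
% Assumed Wobbrock-style agreement score (not stated in the abstracts above).
% P_r is the set of gestures proposed for a referent (command) r, and the
% P_i are the groups of identical gestures within P_r.
A_r = \sum_{P_i \subseteq P_r} \left( \frac{|P_i|}{|P_r|} \right)^{2}
\qquad
A = \frac{1}{|R|} \sum_{r \in R} A_r
```

For example, if (hypothetically) the 35 participants split 20/10/5 across three distinct gestures for one command, that command's agreement would be (20/35)^2 + (10/35)^2 + (5/35)^2 ≈ 0.43.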
To improve full-body interaction in an interactive storytelling scenario, we conducted a study to elicit a user-defined gesture set. Twenty-two users performed 251 gestures while running through the story script with real interaction disabled, but with hints about which set of actions the application currently requested. We describe our interaction design process, starting with the conduct of the study, continuing with the analysis of the recorded data, including the creation of a gesture taxonomy and the selection of gesture candidates, and ending with the integration of the gestures into our application.
Automatic detection and interpretation of social signals carried by voice, gestures, facial expressions, etc. will play a key role in next-generation interfaces, as it paves the way towards more intuitive and natural human-computer interaction. This paper introduces Social Signal Interpretation (SSI), a framework for real-time recognition of social signals. SSI supports a large range of sensor devices, filter and feature algorithms, as well as machine learning and pattern recognition tools. It encourages developers to add new components using SSI's C++ API, but also addresses front-end users by offering an XML interface for building pipelines with a text editor. SSI is freely available under the GPL at http://openssi.net.
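The abstract above describes SSI's component-based architecture: sensors, filter/feature components, and recognition components wired into real-time pipelines. Purely as an illustration of that style of architecture, and explicitly not SSI's actual C++ API, a minimal hypothetical sketch might look like this:

```cpp
// Illustrative sketch only: a minimal component-based signal pipeline in the
// spirit of the architecture described above (sensor -> transformer -> consumer).
// All interfaces and class names here are hypothetical, NOT SSI's real API.
#include <iostream>
#include <vector>

// A frame of raw sensor samples.
using Frame = std::vector<float>;

// Component interfaces: a sensor produces frames, a transformer rewrites them,
// and a consumer turns the final frame into an application-level event.
struct Sensor      { virtual Frame read() = 0;               virtual ~Sensor() = default; };
struct Transformer { virtual Frame apply(const Frame&) = 0;  virtual ~Transformer() = default; };
struct Consumer    { virtual void consume(const Frame&) = 0; virtual ~Consumer() = default; };

// Dummy components so the sketch runs end to end.
struct FakeMicrophone : Sensor {
    Frame read() override { return {0.1f, 0.4f, 0.2f, 0.9f}; }
};
struct MovingAverage : Transformer {
    Frame apply(const Frame& in) override {
        Frame out(in.size(), 0.0f);
        for (size_t i = 0; i < in.size(); ++i)
            out[i] = (in[i] + (i > 0 ? in[i - 1] : in[i])) / 2.0f;
        return out;
    }
};
struct ThresholdClassifier : Consumer {
    void consume(const Frame& f) override {
        for (float v : f)
            if (v > 0.5f) { std::cout << "event detected\n"; return; }
        std::cout << "no event\n";
    }
};

int main() {
    // Wire the pipeline; in a framework like SSI this wiring would be
    // described declaratively (e.g. in XML) rather than hard-coded.
    FakeMicrophone sensor;
    MovingAverage filter;
    ThresholdClassifier classifier;
    classifier.consume(filter.apply(sensor.read()));
    return 0;
}
```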
This paper presents a pilot evaluation study that investigates the physiological response of users when interacting with virtual agents that exhibit culture-specific behaviors in an Augmented Reality environment. In particular, we analyze users from Arab and German cultural backgrounds. The initial results of our analysis are promising and show that users tend to exhibit higher physiological arousal towards virtual agents that do not display behaviors of their own cultural background.
Game books can offer a well-written but non-linear story, as readers repeatedly have to decide how to continue after reading a text passage. It therefore seems natural to adapt such a book to investigate interaction paradigms for an interactive storytelling scenario. Nevertheless, it is not easy to keep the player motivated during a long narrated story until the next point of intervention is reached. In this paper, we implemented different methods for the decision process in such a scenario using speech input and tested them with 26 participants in a two-player setting. This revealed that omitting the on-screen prompt made the application less easy to use, but led to considerably more user interaction. We further added interactivity with so-called Quick Time Events (QTEs). In these events, the player has a limited amount of time to perform a specific action after a corresponding prompt appears on screen. Different versions of QTEs were implemented using full-body tracking with a Microsoft Kinect, and were tested with another 18 participants in a two-player setting. We found that full-body gestures were easier to perform and, in general, preferred over controlling a cursor with one hand and hitting buttons with it.