Abstract. Although manipulating 3D virtual models with mid-air hand gestures offers the benefits of natural interaction and freedom from the sanitation problems of touch surfaces, many factors can influence the usability of such an interaction paradigm. In this research, the authors conducted experiments to study vision-based mid-air hand gestures for scaling, translating, and rotating a 3D virtual car displayed on a large screen. An Intel RealSense 3D camera was employed for hand gesture recognition. A two-hand gesture, grabbing and then moving the hands apart or closer together, was applied to enlarging or shrinking the 3D virtual car. A one-hand gesture, grabbing and then moving, was applied to translating a car component. A two-hand gesture, grabbing and moving the hands relative to each other along the circumference of a horizontal circle, was applied to rotating the car. Seventeen graduate students were invited to participate in the experiments and to evaluate and comment on gesture usability. The results indicated that the width and depth of the detection range were the key usability factors for two-hand gestures with linear motions. For dynamic gestures with quick transitions from open to closed hand poses, ensuring robust gesture recognition was extremely important. Furthermore, even for a gesture with ergonomic postures, an inappropriate control-response ratio could cause fatigue due to the repetitive hand exertions needed to achieve precise control in 3D model manipulation tasks.
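The three gesture-to-transform mappings described above can be sketched as simple geometric computations on tracked hand positions. The following is a minimal illustration, not the authors' implementation: the function names, 2D hand coordinates, and `cr_ratio` parameter are assumptions made for the example, and a real system would consume 3D joint data from the RealSense SDK.

```python
import math

def scale_factor(prev_l, prev_r, cur_l, cur_r):
    # Two-hand scaling: the model's scale multiplier is the ratio of the
    # current inter-hand distance to the previous inter-hand distance.
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])
    return dist(cur_l, cur_r) / dist(prev_l, prev_r)

def rotation_angle(prev_l, prev_r, cur_l, cur_r):
    # Two-hand rotation: the change in angle of the left-to-right hand
    # vector, as the hands move along the circumference of a circle.
    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])
    return angle(cur_l, cur_r) - angle(prev_l, prev_r)

def translate(prev, cur, cr_ratio=1.0):
    # One-hand translation: hand displacement scaled by a control-response
    # ratio (model units moved per unit of hand motion). Too small a ratio
    # forces repeated gestures for large moves; too large a ratio makes
    # precise placement difficult.
    return ((cur[0] - prev[0]) * cr_ratio,
            (cur[1] - prev[1]) * cr_ratio)
```

For example, hands moving from one unit apart to two units apart yields a scale factor of 2.0, and a hypothetical `cr_ratio` below 1.0 trades travel distance for precision, which relates to the fatigue finding reported in the abstract.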
In order to reduce attention switching and increase the performance and pleasure of mobile learning in heritage temples, the objective of this research was to employ Augmented Reality (AR) technology in the user interfaces of mobile devices. Based on a field study and a literature review, three user interface prototypes were constructed. They all offered two service modes but differed in the location of navigation bars and in text display approaches. The experimental results showed that users preferred animated and interactive virtual objects or characters with sound effects. In addition, transparent backgrounds for images and text message boxes were preferred. Superimposed information should not cover more than thirty percent of the screen, so that users can still see the background clearly.
Recently, the technology of mid-air gestures for manipulating 3D digital content has become an important research issue. In order to conform to the needs of users and contexts, eliciting user-defined gestures is inevitable. However, it has been reported that user-defined hand gestures tend to vary significantly in posture, motion, and speed, making it difficult to identify common characteristics. In this research, the authors conducted an experiment to study intuitive hand gestures for controlling the rotation of 3D digital furniture. Twenty graduate students majoring in Industrial Design were invited to participate in the task. Although there was great variety among participants, common characteristics were extracted through systematic behavior coding and analysis. The results indicated that the open palm and the D handshape (American Sign Language) were the most intuitive hand poses. In addition, moving the hands along the circumference of a horizontal circle was the most intuitive hand motion and trajectory.