This paper presents a dynamic motion control technique for human-like articulated figures in a physically based character animation system. The method controls a figure so that it tracks input motion specified by a user. When environmental physical input, such as an external force or a collision impulse, is applied to the figure, the method generates dynamically changing motion in response. We introduce comfort and balance control to compute the angular accelerations of the figure's joints. Our algorithm controls the separate parts of a human-like articulated figure independently, each through a minimal number of degrees of freedom. With this approach, our algorithm simulates realistic human motion at low computational cost. Unlike existing dynamic simulation systems, our method assumes that the input motion is already realistic and aims to change it dynamically, in real time, only when unexpected physical input is applied to the figure. As such, our method works efficiently within the framework of current computer games.
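To make the tracking idea concrete, the following is a minimal Python sketch of per-part tracking control. The PD-style gains, the part attributes, and the `balance_controller.adjust` call are hypothetical illustrations of the approach described above, not the paper's actual formulation.

```python
import numpy as np

def tracking_acceleration(q, dq, q_ref, dq_ref, kp=400.0, kd=40.0):
    """PD-style tracking (illustrative): joint angular accelerations that
    pull the current pose (q, dq) toward the reference motion (q_ref, dq_ref).
    All arguments are per-joint angle vectors in radians."""
    return kp * (q_ref - q) + kd * (dq_ref - dq)

def control_step(parts, balance_controller, dt):
    """Hypothetical per-part control loop: each body part (e.g. torso,
    arms, legs) is driven independently through its own small set of
    degrees of freedom, with balance handled separately."""
    for part in parts:
        ddq = tracking_acceleration(part.q, part.dq, part.q_ref, part.dq_ref)
        part.dq = part.dq + ddq * dt   # integrate acceleration
        part.q = part.q + part.dq * dt # integrate velocity
    balance_controller.adjust(parts)    # keep the center of mass supported
```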
Creating long motion sequences is a time-consuming task even when motion capture equipment or motion editing tools are used. In this paper, we propose a system for creating a long motion sequence by combining elementary motion clips. The user first arranges input motions on a timeline, and the system then automatically generates a continuous, natural motion. Our system employs four motion synthesis methods: motion transition, motion connection, motion adaptation, and motion composition. Based on the constraints between the animated character's feet and the ground, and on the timing of the input motions, the appropriate method is determined for each pair of overlapping or sequential motions. As the user rearranges the motion clips, the system interactively updates the output motion. Alternatively, the user can make the system execute an input motion as soon as possible so that it follows the previous motion smoothly. Using our system, users can make use of existing motion clips. Because the entire process is automatic, even novices can easily use our system. A prototype system demonstrates the effectiveness of our approach.
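As a rough illustration of how a synthesis method might be selected for each pair of clips, consider the Python sketch below. The decision rules and the clip attributes shown here are invented for illustration; the paper's actual criteria are the foot-ground constraints and the timing of the input motions.

```python
def choose_method(prev_clip, next_clip):
    """Pick one of the four synthesis methods for a pair of clips on
    the timeline. Illustrative rules only, not the paper's procedure."""
    overlap = prev_clip.end > next_clip.start
    feet_constrained = prev_clip.feet_planted and next_clip.feet_planted
    if overlap and feet_constrained:
        return "motion composition"  # layer the overlapping motions
    if overlap:
        return "motion adaptation"   # adapt one motion to the other
    if prev_clip.end == next_clip.start:
        return "motion connection"   # clips touch: join them directly
    return "motion transition"       # gap between clips: synthesize a transition
```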
This paper presents a motion-capture-based control framework for third-person-view virtual reality applications. Using motion capture devices, a user can directly control the full-body motion of an avatar in virtual environments. In addition, using a third-person view, in which the user watches his or her own avatar on the screen, the user can visually sense his or her own movements and interactions with other characters and objects. However, a few fundamental problems remain. First, it is difficult to realize physical interactions from the environment to the avatar. Second, it is difficult for the user to walk around virtual environments, because the motion capture area is very small compared to those environments. This paper proposes a novel framework to solve these problems. We propose a tracking control framework in which the avatar is controlled so as to track input motion from a motion capture device as well as system-generated motion. When an impact is applied to the avatar, the system finds an appropriate reactive motion and controls the weights of two tracking controllers to realize realistic yet controllable reactions. In addition, when the user walks in place, the system generates a walking motion for the controller to track. The walking speed and turn angle are also controlled through the user's walking gestures. Using our framework, the system generates seamless transitions between user-controlled motions and system-generated motions. We also introduce a prototype application, including a simplified optical motion capture system.
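A minimal sketch of the weight control between the two tracking controllers, assuming a simple linear blend of target poses and a decaying reactive weight (the function and field names are hypothetical):

```python
def blended_target(q_mocap, q_reactive, w):
    """Blend the motion-capture target pose with a system-generated
    reactive pose; w in [0, 1] is the weight of the reactive controller."""
    return (1.0 - w) * q_mocap + w * q_reactive

def on_impact(state):
    state.w = 1.0  # hand control to the reactive motion when hit

def update(state, dt, recovery_rate=2.0):
    # Decay the reactive weight so control returns smoothly to the user,
    # giving a seamless transition back to the motion-capture input.
    state.w = max(0.0, state.w - recovery_rate * dt)
    return blended_target(state.q_mocap, state.q_reactive, state.w)
```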
In this paper, we propose an automatic learning method for gesture recognition. We combine two different pattern recognition techniques: the Self-Organizing Map (SOM) and the Support Vector Machine (SVM). First, we apply the SOM to divide the sample data into phases and construct a state machine. Next, we apply the SVM to learn the transition conditions between nodes; an independent SVM is constructed for each node. Among the various pattern recognition techniques for multi-dimensional data, the SOM is suitable for categorizing data into groups, so it is used in the first step. The SVM, on the other hand, is suitable for partitioning the feature space into regions belonging to each class, so it is used in the second step. Our approach is unique and effective for multi-dimensional, time-varying gesture recognition. The proposed method is a general gesture recognition method that can handle any kind of input data from any input device. In the experiment presented in this paper, we used two Nintendo Wii Remote controllers with three-dimensional acceleration sensors as input devices. The proposed method successfully learned recognition models for several gestures.
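The two-stage pipeline can be sketched in Python as follows, here using the third-party MiniSom and scikit-learn libraries as stand-ins; the grid size, training parameters, and per-frame labeling scheme are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from minisom import MiniSom  # third-party SOM implementation (assumed available)
from sklearn.svm import SVC

def train_recognizer(frames, transition_labels):
    """frames: (n, d) sensor feature vectors, e.g. 3-axis accelerations.
    transition_labels: per-frame label of the transition taken next."""
    # Stage 1: the SOM groups frames into phases (one phase per winning node).
    som = MiniSom(4, 4, frames.shape[1], sigma=1.0, learning_rate=0.5)
    som.train_random(frames, 1000)

    nodes = {}
    for x, y in zip(frames, transition_labels):
        nodes.setdefault(som.winner(x), []).append((x, y))

    # Stage 2: one SVM per node learns the transition conditions from
    # that phase, partitioning the feature space among the classes.
    svms = {}
    for node, samples in nodes.items():
        X = np.array([s[0] for s in samples])
        Y = np.array([s[1] for s in samples])
        if len(set(Y)) > 1:  # SVC needs at least two classes per node
            svms[node] = SVC(kernel="rbf").fit(X, Y)
    return som, svms
```

At recognition time, a new frame would be routed to its winning SOM node and classified by that node's SVM to decide the next state-machine transition.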