In this paper we present a review of computable descriptors of human motion. We first present low-level descriptors that compute quantities directly from the raw motion data. We then present higher-level descriptors that build on the low-level ones to compute boolean, single-valued or continuous quantities that can be interpreted, automatically or manually, to qualify the meaning, style or expressiveness of a motion. We provide formulas inspired by the state of the art that can be applied to 3D motion capture data.
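As an illustration of the kind of low-level descriptor the abstract refers to, here is a minimal sketch that computes per-frame joint speed from raw 3D motion capture positions by finite differences. The function name and feature choice are ours, not the paper's.

```python
import numpy as np

def joint_speed(positions, dt):
    """Low-level descriptor sketch: per-frame speed of one joint.

    positions: (T, 3) array of 3D joint positions from MoCap data.
    dt: capture interval in seconds.
    Returns a (T-1,) array of speeds in position-units per second.
    """
    # Finite-difference velocity between consecutive frames.
    velocity = np.diff(positions, axis=0) / dt
    # Speed is the Euclidean norm of the velocity vector.
    return np.linalg.norm(velocity, axis=1)

# A joint moving 0.1 units per frame along x, captured at 100 Hz.
traj = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.2, 0.0, 0.0]])
print(joint_speed(traj, dt=0.01))  # [10. 10.]
```

Higher-level descriptors could then threshold or aggregate such quantities over time, e.g. to classify a segment as static or dynamic.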
Figure 1: A synthetic image in which our proposed BRDF model is used on a fluorescent orange surface illuminated by several collimated monochrome light sources. The scene geometry is similar to that shown in figure 2. Note the colours of the directly viewed bright dots on the material itself, and the in some cases considerably different colours seen in the reflection patterns. It is noteworthy that the blue and green monochrome lights (second and third from the left), which fall into the main area of the absorption curve shown in figure 4, exhibit the largest colour discrepancies between specular and diffuse reflection.

Abstract: Fluorescence is an interesting and visually prominent effect which has not yet been fully covered by Computer Graphics research. While the physical phenomenon of fluorescence has been addressed in isolation, the actual reflection behaviour of real fluorescent surfaces has never been documented, and no analytical BRDF models for such surfaces have been published. This paper illustrates the reflection properties typical of diffuse fluorescent surfaces, and provides a BRDF model based on a layered microfacet approach that mimics them.
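The colour shift the caption describes, where light absorbed at short wavelengths is re-emitted at longer ones, is commonly modelled with a reradiation (Donaldson) matrix. The following sketch shows that idea for the diffuse part only; it is an assumption for illustration, not the paper's layered-microfacet model, and the three-band matrix values are invented.

```python
import numpy as np

def fluorescent_diffuse(incident, reradiation):
    """Wavelength cross-talk of a fluorescent diffuse surface.

    incident: (n,) incident spectrum sampled in n wavelength bands.
    reradiation: (n, n) matrix; entry [j, i] is the fraction of band-i
    energy re-emitted in band j. A lower-triangular matrix encodes the
    Stokes shift: energy only moves towards longer wavelengths.
    """
    return reradiation @ incident

# Hypothetical 3-band (blue, green, red) reradiation matrix: part of
# the absorbed blue and green energy reappears in the red band.
M = np.array([[0.2, 0.0, 0.0],
              [0.3, 0.5, 0.0],
              [0.4, 0.3, 0.8]])
blue_light = np.array([1.0, 0.0, 0.0])
print(fluorescent_diffuse(blue_light, M))  # [0.2 0.3 0.4]
```

Under a monochrome blue source the diffuse component is no longer blue, which is consistent with the colour discrepancies the caption points out for the blue and green lights.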
Figure 1: Clusters and corresponding marker sets automatically determined by applying our K-means clustering algorithm (with K = 30 clusters) to the range-of-motion sequences of actors A (a,a'), B (b,b'), C (c,c'), D (d,d') and to the combined sequences of actors B+C+D (e,e').
A sign language utterance can be seen as a continuous stream of motion comprising the signs themselves and the inter-sign movements, or transitions. As in speech, coarticulation constitutes an important part of the language: signs are contextualized, and their form and, above all, the transitions depend greatly on the surrounding signs. For that reason, the manual segmentation of sign language utterances is a difficult and imprecise task. Moreover, annotators often assume that both hands are synchronous, which is not always true in practice. In this paper, we first propose a technique to automatically refine the segmentation by adjusting the manual tags that separate signs from transitions. We then study motion transitions between consecutive signs and, in particular, their duration. Based on this analysis, we propose several techniques for computing the transition duration. Finally, we use our findings in our motion synthesis platform to create new utterances in French Sign Language.
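One simple way to refine a sign/transition boundary and to measure a transition's duration is to threshold hand speed, since transitions are typically fast transfer movements between more stable hand configurations. The sketch below is such a heuristic under our own assumptions; the paper's actual computation techniques may differ.

```python
import numpy as np

def transition_frames(speed, threshold):
    """Count the frames classified as transition between two signs.

    speed: (T,) per-frame hand speed over the inter-sign interval.
    threshold: speed above which the hand is considered to be moving
    (a tuning parameter, chosen here arbitrarily for illustration).
    Divide the result by the frame rate to get a duration in seconds.
    """
    return int(np.count_nonzero(speed > threshold))

# Hand nearly at rest, a fast transfer movement, then rest again.
speed = np.array([0.01, 0.02, 0.8, 1.2, 0.9, 0.03, 0.01])
n = transition_frames(speed, threshold=0.1)
print(n / 100.0)  # duration in seconds at 100 fps: 0.03
```

Because each hand gets its own speed curve, this kind of measure also makes the hands' asynchrony directly observable instead of assuming synchrony as annotators often do.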
Abstract. While human communication involves rich, complex and expressive gestures, the available corpora of captured motions used for the animation of virtual characters mostly contain actions ranging from locomotion to everyday activities. We aim to create a novel corpus of expressive and meaningful gestures, focusing on the body movements and gestures involved in theatrical scenarios. In this paper we propose a methodology for building a corpus of full-body theatrical gestures based on a magician show enriched with affective content. We then validate the constructed corpus of theatrical gestures and sequences through several perceptual studies focusing on the complexity of the produced movements as well as the recognizability of the added affective content.
Existing work on the animation of signing avatars often relies on purely procedural techniques or on the playback of Motion Capture (MoCap) data. While the first solution produces robotic and unnatural motions, the second is very limited in the number of signs it can produce. In this paper, we propose data-driven motion synthesis techniques that increase the variety of Sign Language (SL) motions that can be generated from a limited database. To generate new signs and inflection mechanisms from an annotated French Sign Language MoCap corpus, we rely on phonological recombination, i.e. on the retrieval and modular reconstruction of SL content at a phonological level, with a particular focus on three phonological components of SL: hand placement, hand configuration and hand movement. We modify the values taken by those components in different signs to create inflected versions or completely new signs by (i) applying motion retrieval at a phonological level to exchange the value of one component without any modification, (ii) editing the retrieved data with different operators, or (iii) using conventional motion generation techniques such as interpolation or inverse kinematics, parameterized to comply with the kinematic properties of real motion observed in the data set. The quality of the synthesized motions is perceptually assessed through two distinct evaluations involving 75 and 53 participants respectively.
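Of the conventional generators listed in (iii), interpolation is the easiest to sketch. The version below blends a hand placement between two key poses with smoothstep easing, which gives zero velocity at both endpoints, a crude stand-in for the kinematic constraints mentioned in the abstract; the function and its easing choice are our illustration, not the paper's exact parameterization.

```python
import numpy as np

def interpolate_placement(p_start, p_end, n_frames):
    """Interpolate a 3D hand placement between two key poses.

    p_start, p_end: (3,) hand positions taken from two signs.
    n_frames: number of frames to generate, endpoints included.
    Returns an (n_frames, 3) trajectory.
    """
    s = np.linspace(0.0, 1.0, n_frames)
    # Smoothstep easing (3s^2 - 2s^3): zero velocity at both ends,
    # loosely mimicking the bell-shaped speed profile of real reaching.
    t = (3.0 * s**2 - 2.0 * s**3)[:, None]
    return (1.0 - t) * np.asarray(p_start) + t * np.asarray(p_end)
```

The same scheme extends to joint angles or to the other phonological components, with the easing profile fitted to speed profiles observed in the MoCap data rather than fixed analytically.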