In this article we present a multichannel animation system for producing utterances signed in French Sign Language (LSF) by a virtual character. The main challenges for such a system are simultaneously capturing data for the entire body, including movements of the torso, hands, and face, and developing a data-driven animation engine that accounts for the expressive characteristics of signed languages. Our approach decomposes motion along different channels, representing the body parts that correspond to the linguistic components of signed languages. We show the ability of this animation system to create novel utterances in LSF, and present an evaluation by target users that highlights the importance of the respective body parts in the production of signs. We validate our framework by testing the believability and intelligibility of our virtual signer.
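As a minimal sketch of the channel decomposition described above, the following code assembles a full-body pose from independent channels and blends two poses channel by channel. The channel names, the `MultiChannelPose` class, and the scalar joint values are all hypothetical illustrations, not the paper's actual data model.

```python
from dataclasses import dataclass, field

# Hypothetical channel set mirroring the linguistic components of signed
# languages (assumed names; the paper's exact decomposition may differ).
CHANNELS = ("torso", "right_hand", "left_hand", "face", "gaze")

@dataclass
class MultiChannelPose:
    # Each channel maps joint names to scalar rotation values (placeholder).
    channels: dict = field(default_factory=dict)

    def compose(self, other, weights):
        """Blend two poses channel by channel with per-channel weights.

        weights[name] = 0.0 keeps this pose's channel, 1.0 takes `other`'s.
        """
        out = MultiChannelPose()
        for name in CHANNELS:
            w = weights.get(name, 0.5)
            a = self.channels.get(name, {})
            b = other.channels.get(name, {})
            out.channels[name] = {
                joint: (1 - w) * a.get(joint, 0.0) + w * b.get(joint, 0.0)
                for joint in set(a) | set(b)
            }
        return out
```

Per-channel weighting is what lets one body part (e.g. the face) be replaced or edited without disturbing the others, which is the point of decomposing the motion along linguistic channels.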
In this paper, we assessed the efficacy of different types of visual information for improving the execution of the roundoff movement in gymnastics. Specifically, two types of 3D feedback were compared with a 3D visualization displaying only the movement of the expert (observation), as well as with a more traditional video observation. The improvement in movement execution was measured using two methods: subjective evaluations performed by official judges, and more quantitative appraisals based on time-series analyses. Video demonstration providing information about the expert, and 3D feedback (i.e., a 3D representation of the movement in monoscopic vision) combining information about the movements of the expert and the learner, were the two types of feedback yielding the best improvement in movement execution, as subjectively evaluated by the judges. Much less conclusive results were obtained when assessing movement execution with quantification methods based on time-series analysis. Correlation analyses showed that the subjective evaluations performed by the judges can hardly be predicted or explained by the more objective results of the time-series analyses.
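The correlation check described above can be sketched as follows, with a plain RMS distance between a learner's time series and the expert's standing in for the paper's actual time-series analyses. The function names and data are hypothetical illustrations only.

```python
import math

def rms_distance(series, expert):
    """Root-mean-square distance between two equal-length time series
    (an assumed stand-in for the paper's time-series measures)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(series, expert)) / len(expert))

def pearson(xs, ys):
    """Pearson correlation between judges' scores and quantitative distances."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A correlation near zero between the judges' scores and such distances would correspond to the paper's finding that the subjective evaluations are hardly explained by the quantitative results.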
Over the past decade, many fields of discovery have begun to use motion capture data, leading to exponential growth in the size of motion databases. Querying, indexing and retrieving motion capture data have thus become crucial for the accessibility and usability of such databases. Our aim is to make this approach feasible for virtual agents signing in French Sign Language (LSF), taking into account the semantic information implicitly contained in sign language data. We propose a new methodology for accessing our database that simultaneously uses both a semantic and a captured-motion database, with different indexing schemes for the two parts. This approach is used to efficiently retrieve stored motions for producing real-time sign language animations. The complete process and its runtime efficiency are described, from querying motion in the semantic database to computing transitory segments between signs and producing animations of a realistic virtual character.
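A minimal sketch of the dual-database retrieval might look as follows: a semantic index maps glosses (sign labels) to motion-clip identifiers, a motion store holds the captured frames, and transitory segments are interpolated between consecutive clips. The dictionary layout, the `retrieve` function, and the use of linear interpolation are assumptions for illustration, not the paper's actual indexing or transition scheme.

```python
# Hypothetical semantic index: gloss -> clip identifier.
semantic_db = {"HELLO": "clip_012", "THANK_YOU": "clip_047"}
# Hypothetical motion store: clip identifier -> captured frames
# (scalars here; real frames would be full skeleton poses).
motion_db = {"clip_012": [0.0, 0.2, 0.4], "clip_047": [1.0, 0.8, 0.6]}

def retrieve(glosses, n_transition=2):
    """Resolve each gloss via the semantic index, fetch its motion clip,
    and insert interpolated transitory frames between consecutive clips."""
    frames = []
    prev_last = None
    for gloss in glosses:
        clip = motion_db[semantic_db[gloss]]
        if prev_last is not None:
            # Linear interpolation as a stand-in for the computed
            # transitory segments between signs.
            for k in range(1, n_transition + 1):
                t = k / (n_transition + 1)
                frames.append((1 - t) * prev_last + t * clip[0])
        frames.extend(clip)
        prev_last = clip[-1]
    return frames
```

Keeping the semantic index separate from the bulky motion store is what allows the query ("which signs, in what order?") to be answered cheaply before any motion data is touched.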
Motion editing requires preserving the spatial and temporal information of the motion as faithfully as possible. We propose a new representation of motion based on the Laplacian expression of a 3D+t graph: the set of connected graphs given by the skeleton over time. Through this Laplacian representation, we propose an application that allows easy, interactive editing, correction or retargeting of a motion. The newly created motion results from the combination of two minimizations, one linear and one non-linear: the first penalizes the difference between the Laplacian coordinates of the original animation and those of the desired one, while the second preserves the lengths of the skeleton's segments. Using several examples, we demonstrate the benefits of our method, in particular the preservation of the spatiotemporal properties of the motion in an interactive context.
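The linear step described above can be sketched on a single frame of a simple kinematic chain: the Laplacian coordinates delta = L x encode each joint's position relative to its neighbours, and after a handle joint is moved, new positions are recovered by least squares that penalizes deviation from the original Laplacian coordinates. This is a minimal illustration under assumed conventions (a path-graph skeleton, a soft handle constraint); the non-linear segment-length minimization from the abstract is omitted.

```python
import numpy as np

def laplacian_matrix(n):
    """Graph Laplacian (uniform weights) of an n-joint kinematic chain."""
    L = np.zeros((n, n))
    for i in range(n):
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
        L[i, i] = 1.0
        for j in nbrs:
            L[i, j] = -1.0 / len(nbrs)
    return L

def edit(positions, handle_idx, handle_pos, w=10.0):
    """Move one handle joint; solve for all joints in least squares,
    keeping the Laplacian coordinates (local detail) of the original."""
    n = len(positions)
    L = laplacian_matrix(n)
    delta = L @ positions  # original Laplacian coordinates
    # Stack the Laplacian rows with a weighted row constraining the handle.
    A = np.vstack([L, w * np.eye(n)[handle_idx:handle_idx + 1]])
    b = np.vstack([delta, w * np.asarray(handle_pos, dtype=float)[None, :]])
    new_pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return new_pos
```

Because the Laplacian is translation-invariant, dragging one joint of a straight chain in this sketch translates the whole chain rigidly; richer deformations arise once several handles are constrained, and the full method adds the non-linear pass so bone lengths survive the edit.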