From a highly distributed timed automata specification, the paper analyses an implementation in the form of a looping controller that may launch many tasks in each cycle. Qualitative and quantitative constraints on the specification are distinguished in order to allow such an implementation, and the analysis of the semantic differences between the specification and the implementation leads to the definition of an overapproximating model. The implementation is then "sandwiched" between the original specification and the new model, making it possible to check whether the important properties of the specification are preserved by the implementation.
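To make the shape of such an implementation concrete, here is a minimal Python sketch of a looping controller that samples inputs and launches a batch of tasks in each cycle. The cycle period and the read_inputs/enabled_tasks helpers are hypothetical placeholders for illustration, not the controller analyzed in the paper.

```python
import time
from concurrent.futures import ThreadPoolExecutor

CYCLE_PERIOD = 0.010  # hypothetical 10 ms cycle; the paper's period is not given here

def read_inputs():
    """Placeholder: sample the environment at the start of a cycle."""
    return {}

def enabled_tasks(inputs):
    """Placeholder: decide which tasks this cycle must launch (possibly many)."""
    return [lambda: None]

def control_loop(pool: ThreadPoolExecutor, cycles: int) -> None:
    for _ in range(cycles):
        start = time.monotonic()
        inputs = read_inputs()
        # Launch possibly many tasks in this cycle, as in the implementation model.
        futures = [pool.submit(task) for task in enabled_tasks(inputs)]
        for f in futures:
            f.result()  # wait for the cycle's tasks to complete
        # Sleep away the remainder of the cycle to keep a fixed period.
        remaining = CYCLE_PERIOD - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)

with ThreadPoolExecutor() as pool:
    control_loop(pool, cycles=100)
```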
The purpose of this paper is to describe human motions and emotions that appear in real video images with compact and informative representations. We aim to recognize expressive motions and to analyze the relationship between human body features and emotions. We propose a new descriptor vector for expressive human motions, inspired by the Laban Movement Analysis (LMA) method, a descriptive language with an underlying semantics that makes it possible to qualify human motion in its different aspects. The proposed descriptor is fed into a machine learning framework comprising a Random Decision Forest, a Multi-Layer Perceptron, and two multiclass Support Vector Machine methods. We evaluated our descriptor first for motion recognition and second for emotion recognition from the analysis of expressive body movements. Preliminary experiments with three public datasets, MSRC-12, MSR Action 3D, and UTKinect, showed that our model performs better than many existing motion recognition methods. We also built a dataset composed of 10 control motions (move, turn left, turn right, stop, sit down, wave, dance, introduce yourself, increase velocity, decrease velocity). We tested our descriptor vector and
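As a rough illustration of how such a descriptor could be fed into the classifiers named above, here is a scikit-learn sketch. The descriptor dimensionality, hyperparameters, and random data are assumptions made purely for the example; SVC is one-vs-one internally, and OneVsRestClassifier stands in for a second multiclass SVM strategy.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical data: one LMA-inspired descriptor vector per motion clip.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))     # 200 clips, 40-dim descriptors (illustrative)
y = rng.integers(0, 10, size=200)  # 10 motion classes, as in the control-motion dataset

classifiers = {
    "random_forest": RandomForestClassifier(n_estimators=100),
    "mlp": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000),
    "svm_one_vs_one": SVC(kernel="rbf"),                      # SVC is one-vs-one internally
    "svm_one_vs_rest": OneVsRestClassifier(SVC(kernel="rbf")),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```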
Interactive robotics is a vast and expanding research field. Interactions must be sufficiently natural, with robots exhibiting behavior that humans find socially acceptable and that adapts to user expectations, thus allowing easy integration into our daily lives in various fields (science, industry, health, ...). Natural interaction during human-robot collaborative action requires suitable interaction techniques. In this paper we develop an online gesture recognition system for natural and intuitive communication between a human and the NAO robot. Recognizing meaningful gesture patterns from whole-body gestures is a complex task, which is why we use the Laban Movement Analysis technique to describe high-level gestures for NAO teleoperation. The major contributions of the present work are: (1) an efficient preprocessing step based on a view-invariant representation of human motion, (2) a robust descriptor vector based on the Laban Movement Analysis technique that generates compact and informative representations of human movement, and (3) an online gesture recognition method based on Hidden Markov Models, applied to teleoperate NAO using our own database dedicated to NAO teleoperation. Our approach was evaluated on two challenging datasets, Microsoft Research Cambridge-12 (MSRC-12) and UTKinect-Action. Experimental results show that our approach outperforms state-of-the-art methods.
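A common way to realize per-class gesture recognition with Hidden Markov Models is to train one HMM per gesture and classify an incoming descriptor sequence by maximum log-likelihood. The sketch below uses the hmmlearn library with synthetic data; the number of states, feature dimension, and gesture names are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Hypothetical training data: for each gesture class, a list of descriptor
# sequences (frames x features). Dimensions are invented for the example.
rng = np.random.default_rng(0)
def fake_sequences(n_seq=20, n_frames=30, n_feat=12):
    return [rng.normal(size=(n_frames, n_feat)) for _ in range(n_seq)]

train = {"wave": fake_sequences(), "stop": fake_sequences(), "turn_left": fake_sequences()}

# Train one HMM per gesture class on its concatenated sequences.
models = {}
for gesture, seqs in train.items():
    X = np.concatenate(seqs)
    lengths = [len(s) for s in seqs]
    m = GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
    m.fit(X, lengths)
    models[gesture] = m

def classify(seq):
    """Score an incoming descriptor sequence against every class model and
    return the gesture whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda g: models[g].score(seq))

print(classify(rng.normal(size=(30, 12))))
```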
The visionary objective of this work is to "open to people connected to the Internet an access to ocean depths, anytime, anywhere." Today these people can only perceive the changing surface of the sea from the shore and remain ignorant of almost everything that lies hidden beneath it. If they could explore the seabed and become knowledgeable about it, they might eventually get involved in finding alternative solutions to our vital terrestrial problems: pollution, climate change, destruction of biodiversity, and exhaustion of the Earth's resources. The introduction of Mixed Reality and the Internet into aquatic activities constitutes a technological rupture compared with existing related technologies. Through the Internet, anyone, anywhere, at any moment will naturally be able to dive in real time, using a Remotely Operated Vehicle (ROV), in the most remarkable sites around the world. The heart of this work is Mixed Reality. The main challenge is to achieve real-time display of a digital video stream to web users by mixing 3D entities (objects or pre-processed underwater terrain surfaces) with 2D live video collected in real time by a teleoperated ROV.
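The core mixing step, rendering a 3D entity on top of a live 2D frame given the camera's pose, can be sketched with OpenCV as below. The intrinsics, pose, and cube wireframe are invented stand-ins; in the actual system these values would come from the ROV's calibrated camera and its tracking.

```python
import numpy as np
import cv2

# Hypothetical camera intrinsics and pose (assumptions for the sketch).
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
dist = np.zeros(5)
rvec = np.zeros(3)             # camera rotation (Rodrigues vector)
tvec = np.array([0., 0., 5.])  # object 5 units in front of the camera

# A 3D entity to mix into the video: the wireframe of a unit cube.
cube = np.float32([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)])
edges = [(0, 1), (0, 2), (0, 4), (3, 1), (3, 2), (3, 7),
         (5, 1), (5, 4), (5, 7), (6, 2), (6, 4), (6, 7)]

frame = np.zeros((480, 640, 3), np.uint8)  # stand-in for a live 2D video frame
pts, _ = cv2.projectPoints(cube, rvec, tvec, K, dist)
pts = [tuple(map(int, q)) for q in pts.reshape(-1, 2)]
for i, j in edges:
    cv2.line(frame, pts[i], pts[j], (0, 255, 0), 2)  # draw the 3D overlay
```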
Camera pose estimation from video images is a fundamental problem in machine vision and Augmented Reality (AR) systems. Most existing solutions are either linear, for both n points and n lines, or iterative, relying on nonlinear optimization of some geometric constraints. In this paper, we first survey several existing methods and compare their performance in an AR context. Then, we present a new linear algorithm based on a square-fiducial localisation technique that gives a closed-form solution to the pose estimation problem, free of any initialization. We also propose a hybrid technique which combines an iterative method, namely the orthogonal iteration (OI) algorithm, with our closed-form solution. An evaluation of the methods shows that this hybrid pose estimation technique is accurate and robust. Numerical experiments on real data compare the performance of our hybrid method with several iterative techniques and demonstrate the efficiency of our approach.
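The closed-form-then-iterative pattern behind such a hybrid method can be illustrated with OpenCV's pose solvers: a direct solution from the four corners of a square fiducial, then nonlinear refinement seeded with it. Note that cv2.SOLVEPNP_IPPE_SQUARE and cv2.solvePnPRefineLM are analogous stand-ins, not the paper's own linear solution or the OI algorithm, and the marker size, intrinsics, and image points are made up.

```python
import numpy as np
import cv2

# Corners of a square fiducial of side s, in the order SOLVEPNP_IPPE_SQUARE expects.
s = 0.10  # 10 cm marker (illustrative)
obj = np.array([[-s/2,  s/2, 0], [ s/2,  s/2, 0],
                [ s/2, -s/2, 0], [-s/2, -s/2, 0]], np.float32)
img = np.array([[300., 200.], [420., 210.], [410., 330.], [295., 320.]], np.float32)

K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
dist = np.zeros(5)

# Step 1: closed-form pose from the square fiducial, no initialization needed.
ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist, flags=cv2.SOLVEPNP_IPPE_SQUARE)

# Step 2: iterative refinement seeded with the closed-form estimate, mirroring
# the combination of a linear solution with an iterative method such as OI.
rvec, tvec = cv2.solvePnPRefineLM(obj, img, K, dist, rvec, tvec)
print(rvec.ravel(), tvec.ravel())
```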
This paper presents a paradigm of haptic augmented reality through an application enabling interaction with real objects via a virtual tool. In order to interact with the real world, a real haptic probe is used so that the user feels the interaction. Furthermore, through the use of a visual partial-reality-removal process and a camera placed in the real scene, the real tool is hidden in the visual feedback and replaced by the virtual tool. Since the real and virtual probes do not necessarily match, a model of the virtual tool is used to adjust and tune the haptic feedback, while at the same time the virtual tool is visually rendered according to the forces actually measured by the haptic probe. Finally, in a proposed mixed painting application, the paint applied to the real object, i.e. when the user comes into contact with it, is displayed such that its shape is computed from the virtual tool geometry while its size and intensity are derived from the measured forces.
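The final mapping, shape from the virtual tool geometry and size and intensity from the measured force, can be summarized in a few lines. This is a speculative sketch: the linear force-to-size mapping, the force bound, and the parameter names are assumptions, not the paper's model.

```python
def paint_stroke(force_newtons, tool_radius_m, max_force=5.0):
    """Map a measured contact force to paint parameters: the stroke shape comes
    from the virtual tool geometry, while size and intensity come from the
    real measured force. Linear mapping and bounds are assumptions."""
    f = max(0.0, min(force_newtons, max_force)) / max_force  # normalize to 0..1
    return {
        "radius_m": tool_radius_m * (0.5 + 0.5 * f),  # harder press, wider stroke
        "intensity": f,                                # 0..1 paint opacity
    }

print(paint_stroke(force_newtons=2.0, tool_radius_m=0.01))
```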