Quadrotor drones equipped with high-quality cameras have rapidly emerged as novel, affordable, and stable devices for filmmakers. While professional drone pilots can create aesthetically pleasing videos in a short time, the smooth, cinematographic control of a camera drone remains challenging for most users, despite recent tools that either automate part of the process or let users place waypoints manually to create drone trajectories. This article moves a step further by offering high-level control of cinematographic drones for the specific task of framing dynamic targets. We propose techniques to automatically and interactively plan quadrotor drone motions in dynamic three-dimensional (3D) environments while satisfying both cinematographic and physical quadrotor constraints. We first propose the Drone Toric Space, a dedicated camera parameter space with embedded constraints, and derive intuitive on-screen viewpoint manipulators from it. Second, we propose a dedicated path-planning technique that ensures both that cinematographic properties can be enforced along the path and that the path is physically feasible for a quadrotor. Finally, we build on the Drone Toric Space and this path-planning technique to coordinate the motion of multiple drones around dynamic targets. A number of results demonstrate the interactive and automated capabilities of our approach across different use cases.
A novel Empirical Mode Decomposition (EMD) algorithm, called 2T-EMD, for both mono- and multivariate signals is proposed in this paper. It differs from other approaches in its computational lightness and algorithmic simplicity. The method is essentially based on a redefinition of the signal's mean envelope, computed using new characteristic points, which makes it possible to decompose multivariate signals without any projection. The scope of application of the novel algorithm is specified, and the 2T-EMD technique is compared with classical methods on various simulated mono- and multivariate signals. The monovariate behaviour of the proposed method on noisy signals is then validated by decomposing a fractional Gaussian noise, and an application to real-life EEG data is finally presented.
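To make the idea of mean-envelope subtraction concrete, the following is a minimal sketch of classical monovariate EMD, not the 2T-EMD variant described in the abstract. It uses piecewise-linear envelopes in place of the cubic splines used in practice, and a fixed number of sifting passes instead of a convergence criterion; all function names are illustrative.

```python
# Simplified classical EMD sketch (illustrative; not the 2T-EMD algorithm).
# Envelopes are piecewise-linear rather than cubic splines.
import math


def _local_extrema(x):
    """Return indices of local maxima and minima of sequence x."""
    maxima, minima = [], []
    for i in range(1, len(x) - 1):
        if x[i] > x[i - 1] and x[i] > x[i + 1]:
            maxima.append(i)
        elif x[i] < x[i - 1] and x[i] < x[i + 1]:
            minima.append(i)
    return maxima, minima


def _envelope(indices, values, n):
    """Piecewise-linear envelope through (indices, values), length n."""
    env = [0.0] * n
    for i in range(n):
        if i <= indices[0]:          # extend flat before first extremum
            env[i] = values[0]
        elif i >= indices[-1]:       # extend flat after last extremum
            env[i] = values[-1]
        else:
            for k in range(len(indices) - 1):
                if indices[k] <= i <= indices[k + 1]:
                    t = (i - indices[k]) / (indices[k + 1] - indices[k])
                    env[i] = (1 - t) * values[k] + t * values[k + 1]
                    break
    return env


def sift(x, n_sift=10):
    """Extract one IMF by repeatedly subtracting the mean envelope."""
    h = list(x)
    for _ in range(n_sift):
        maxima, minima = _local_extrema(h)
        if len(maxima) < 2 or len(minima) < 2:
            break
        upper = _envelope(maxima, [h[i] for i in maxima], len(h))
        lower = _envelope(minima, [h[i] for i in minima], len(h))
        h = [h[i] - 0.5 * (upper[i] + lower[i]) for i in range(len(h))]
    return h


def emd(x, max_imfs=4):
    """Decompose x into IMFs plus a residue; x == sum(IMFs) + residue."""
    imfs, residue = [], list(x)
    for _ in range(max_imfs):
        maxima, minima = _local_extrema(residue)
        if len(maxima) < 2 or len(minima) < 2:  # near-monotonic: stop
            break
        imf = sift(residue)
        imfs.append(imf)
        residue = [residue[i] - imf[i] for i in range(len(residue))]
    return imfs, residue


# Usage: decompose a two-tone signal; the decomposition is additive,
# so the IMFs plus the residue reconstruct the input exactly.
x = [math.sin(2 * math.pi * 0.05 * t) + 0.5 * math.sin(2 * math.pi * 0.2 * t)
     for t in range(200)]
imfs, residue = emd(x)
```

Because each IMF is subtracted from the running residue, the output always satisfies the additive identity signal = sum(IMFs) + residue; the algorithmic differences between variants such as 2T-EMD lie in how the mean envelope is defined and computed.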
empirical mode decomposition and application to multichannel filtering. Signal Processing, Elsevier, 2011, 91 (12). Abstract: Empirical Mode Decomposition (EMD) is an emerging topic in signal processing research, applied in various practical fields due in particular to its data-driven filter-bank properties. In this paper, a novel EMD approach called X-EMD (eXtended EMD) is proposed, which allows for a straightforward decomposition of mono- and multivariate signals without any change in the core of the algorithm. Qualitative results illustrate the good behavior of the proposed algorithm regardless of the signal dimension. Moreover, a comparative study of X-EMD against classical mono- and multivariate methods is presented and shows its competitiveness. Besides, we show that X-EMD extends the filter-bank properties enjoyed by monovariate EMD to the case of multivariate EMD. Finally, a practical application to multi-channel sleep recordings is presented.
This work aims to enhance the classical video viewing experience by introducing realistic haptic sensations in a consumer environment. More precisely, a complete framework to both produce and render the motion embedded in audiovisual content is proposed to enhance a natural movie viewing session. We especially consider the case of first-person point-of-view audiovisual content and propose a general workflow to address this problem. The latter includes a novel approach to capture both the motion and the video of the scene of interest, together with a haptic rendering system that generates a sensation of motion. A complete methodology to evaluate the relevance of our framework is finally proposed and demonstrates the value of our approach.
This article introduces the ISO/IEC MPEG Immersive Video (MIV) standard, MPEG-I Part 12, which is undergoing standardization. The draft MIV standard provides support for viewing immersive volumetric content captured by multiple cameras with six degrees of freedom (6DoF) within a viewing space that is determined by the camera arrangement in the capture rig. The bitstream format and decoding processes of the draft specification along with aspects of the Test Model for Immersive Video (TMIV) reference software encoder, decoder, and renderer are described. The use cases, test conditions, quality assessment methods, and experimental results are provided. In the TMIV, multiple texture and geometry views are coded as atlases of patches using a legacy 2-D video codec, while optimizing for bitrate, pixel rate, and quality. The design of the bitstream format and decoder is based on the visual volumetric video-based coding (V3C) and video-based point cloud compression (V-PCC) standard, MPEG-I Part 5.
Haptic technology has been widely employed in applications ranging from teleoperation and medical simulation to art and design, including entertainment, flight simulation, and virtual reality. Today there is a growing interest among researchers in integrating haptic feedback into audiovisual systems. A new medium emerges from this effort: haptic-audiovisual (HAV) content. This paper presents the techniques, formalisms, and key results pertinent to this medium. We first review the three main stages of the HAV workflow: the production, distribution, and rendering of haptic effects. We then highlight the pressing need for evaluation techniques in this context and discuss the key challenges in the field. By building on existing technologies and tackling the specific challenges of enhancing the audiovisual experience with haptics, we believe the field presents exciting research perspectives whose financial and societal stakes are significant.