Figure 1: We present (a) the Toric space, a novel and compact representation for intuitive and efficient virtual camera control. We demonstrate the potential of this representation by proposing (b) an efficient automated viewpoint computation technique, (c) a novel and intuitive screen-space manipulation tool, and (d) an effective viewpoint interpolation technique.

Abstract. A large range of computer graphics applications such as data visualization or virtual movie production require users to position and move viewpoints in 3D scenes to effectively convey visual information or tell stories. The desired viewpoints and camera paths are required to satisfy a number of visual properties (e.g. size, vantage angle, visibility, and on-screen position of targets). Yet, existing camera manipulation tools only provide limited interaction methods, and automated techniques remain computationally expensive. In this work, we introduce the Toric space, a novel and compact representation for intuitive and efficient virtual camera control. We first show how visual properties are expressed in this Toric space and propose an efficient interval-based search technique for automated viewpoint computation. We then derive a novel screen-space manipulation technique that provides intuitive and real-time control of visual properties. Finally, we propose an effective viewpoint interpolation technique which ensures the continuity of visual properties along the generated paths. The proposed approach (i) performs better than existing automated viewpoint computation techniques in terms of speed and precision, (ii) provides a screen-space manipulation tool that is more efficient than classical manipulators and easier to use for beginners, and (iii) enables the creation of complex camera motions such as long takes in a very short time and in a controllable way.
As a result, the approach should quickly find its place in a number of applications that require interactive or automated camera control such as 3D modelers, navigation tools or 3D games.
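The abstract does not spell out the parametrization itself, but the core idea of a Toric-style representation can be illustrated: instead of a free 6-DOF pose, a viewpoint around two targets A and B is described by an angle α subtended by the targets at the camera (which fixes their on-screen separation) plus two angles positioning the camera on the resulting surface of revolution around the AB axis. The sketch below is an illustrative reconstruction under these assumptions, not the paper's exact formulation; the names `theta` and `phi` are chosen here for the rotation around AB and the position along the viewing arc.

```python
import numpy as np

def toric_camera_position(A, B, alpha, theta, phi):
    """Illustrative Toric-style parametrization (a sketch, not the
    paper's exact formulas): place a camera that sees segment AB
    under the angle alpha.

    alpha : angle (radians) subtended by the two targets at the camera,
            fixing their on-screen separation (0 < alpha < pi).
    theta : rotation of the camera's half-plane around the AB axis.
    phi   : position along the viewing arc, expressed as the angle at A
            between AB and the camera direction (0 < phi < pi - alpha).
    """
    A, B = np.asarray(A, float), np.asarray(B, float)
    ab = B - A
    n = ab / np.linalg.norm(ab)
    # Build a unit vector orthogonal to n, rotated by theta around n.
    ref = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(ref, n)) > 0.99:          # avoid a near-parallel reference
        ref = np.array([0.0, 1.0, 0.0])
    u = np.cross(n, ref); u /= np.linalg.norm(u)
    v = np.cross(n, u)
    perp = np.cos(theta) * u + np.sin(theta) * v
    # Law of sines in triangle (A, B, camera): the angle at the camera is
    # alpha and the angle at A is phi, so |A -> camera| follows directly.
    dist = np.linalg.norm(ab) * np.sin(np.pi - alpha - phi) / np.sin(alpha)
    direction = np.cos(phi) * n + np.sin(phi) * perp
    return A + dist * direction
```

By the inscribed-angle property, sweeping `theta` moves the camera around the targets while the subtended angle α, and hence the targets' on-screen separation, stays constant; this is what makes the representation compact for search and interpolation.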
Quadrotor drones equipped with high-quality cameras have rapidly emerged as novel, cheap, and stable devices for filmmakers. While professional drone pilots can create aesthetically pleasing videos in a short time, the smooth—and cinematographic—control of a camera drone remains challenging for most users, despite recent tools that either automate part of the process or enable the manual design of waypoints to create drone trajectories. This article moves a step further by offering high-level control of cinematographic drones for the specific task of framing dynamic targets. We propose techniques to automatically and interactively plan quadrotor drone motions in dynamic three-dimensional (3D) environments while satisfying both cinematographic and physical quadrotor constraints. We first propose the Drone Toric Space, a dedicated camera parameter space with embedded constraints, and derive some intuitive on-screen viewpoint manipulators. Second, we propose a dedicated path planning technique that ensures both that cinematographic properties can be enforced along the path and that the path is physically feasible by a quadrotor drone. Finally, we build on the Drone Toric Space and the specific path planning technique to coordinate the motion of multiple drones around dynamic targets. A number of results demonstrate the interactive and automated capacities of our approaches on different use-cases.
Abstract. The rapid increase in the quality of 3D content, coupled with the evolution of hardware rendering techniques, urges the development of camera control systems that enable the application of aesthetic rules and conventions from visual media such as film and television. One of the most important problems in cinematography is that of composition, the precise placement of elements in a shot. Researchers have already considered this problem, but mainly focused on basic compositional properties like size and framing. In this paper, we present a camera system that automatically configures the camera in order to satisfy advanced compositional rules. We have selected a number of those rules and specified rating functions for them; using optimisation, we then find the best possible camera configuration. Finally, for better results, we use image processing methods to rate the satisfaction of rules in the shot.
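The abstract leaves the optimiser unspecified, so the following is only a generic sketch of the overall scheme it describes: each compositional rule is scored by a rating function in [0, 1], the scores are combined into a weighted objective, and a search over camera configurations keeps the best-rated one. The function names and the stochastic search are illustrative assumptions, not the paper's method.

```python
import random

def rate_composition(cam, rules):
    """Aggregate satisfaction of compositional rules for one camera
    configuration; each rule maps a configuration to a score in [0, 1]
    and carries a weight."""
    return sum(weight * rule(cam) for rule, weight in rules)

def best_camera(sample_camera, rules, iterations=1000):
    """Generic stochastic search over camera configurations (a stand-in
    for the paper's optimiser, whose exact method the abstract leaves
    open): sample candidates and keep the best-rated one."""
    best, best_score = None, float("-inf")
    for _ in range(iterations):
        cam = sample_camera()
        score = rate_composition(cam, rules)
        if score > best_score:
            best, best_score = cam, score
    return best, best_score
```

In this framing, the image-processing step the abstract mentions would simply be another rating function that renders the candidate shot and scores the result.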
Generating interactive narratives as movies requires knowledge in cinematography (camera placement, framing, lighting) and film editing (cutting between cameras). We present a framework for generating a well-edited movie from interactively generated scene contents and cameras. Our system computes a sequence of shots by simultaneously choosing which camera to use, when to cut in and out of the shot, and which camera to cut to.
We present the Director's Lens, an intelligent interactive assistant for crafting virtual cinematography using a motion-tracked hand-held device that can be aimed like a real camera. The system employs an intelligent cinematography engine that can compute, at the request of the filmmaker, a set of suitable camera placements for starting a shot. These suggestions represent semantically and cinematically distinct choices for visualizing the current narrative. In computing suggestions, the system considers established cinema conventions of continuity and composition along with the filmmaker's previously selected suggestions and his or her manually crafted camera compositions, using a machine learning component that adapts shot-editing preferences from user-created camera edits. The result is a novel workflow based on interactive collaboration of human creativity with automated intelligence that enables efficient exploration of a wide range of cinematographic possibilities, and rapid production of computer-generated animated movies.
Figure 1: We introduce a set of high-level tools for filming dynamic targets with quadrotor drones. We first propose a specific camera parameter space (the Drone Toric space) together with on-screen viewpoint manipulators compatible with the physical constraints of a drone. We then propose a real-time path planning approach in dynamic environments which ensures both cinematographic properties in viewpoints along the path and feasibility of the path by a quadrotor drone (see green quadrotor). We also present a sketching tool that generates feasible trajectories from hand-drawn input paths (see red quadrotor). Finally, we propose to coordinate the positions and motions of multiple drones around the dynamic targets to ensure coverage of cinematographically distinct viewpoints (see blue quadrotors).
We describe an optimization-based approach for automatically creating well-edited movies from a 3D animation. While previous work has mostly focused on the problem of placing cameras to produce nice-looking views of the action, the problem of cutting and pasting shots from all available cameras has never been addressed extensively. In this paper, we review the main causes of editing errors in the literature and propose an editing model relying on a minimization of such errors. We make a plausible semi-Markov assumption, resulting in a dynamic programming solution which is computationally efficient. We also show that our method can generate movies with different editing rhythms and validate the results through a user study. Combined with state-of-the-art cinematography, our approach therefore promises to significantly extend the expressiveness and naturalness of virtual movie-making.
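The semi-Markov assumption mentioned above admits a compact dynamic program: the cost of a cut (edit) depends only on the two adjacent shots, and the cost of a shot depends only on its camera and its time interval, so the optimal segmentation over T frames can be computed exactly. The sketch below illustrates that structure; the cost-function signatures (`shot_cost`, `cut_cost`) are invented here for illustration and do not reproduce the paper's error terms.

```python
def edit_movie(T, cameras, shot_cost, cut_cost, max_len):
    """Semi-Markov dynamic program (a sketch of the idea, with invented
    cost signatures): choose a camera and a duration for each shot so
    that total shot cost plus cut cost is minimal over T frames.

    shot_cost(cam, s, e): cost of showing `cam` on frames [s, e).
    cut_cost(a, b):       cost of cutting from camera a to camera b.
    """
    INF = float("inf")
    # best[t][c] = minimal cost of editing frames [0, t) ending on camera c
    best = [{c: INF for c in cameras} for _ in range(T + 1)]
    back = [{c: None for c in cameras} for _ in range(T + 1)]
    for c in cameras:
        best[0][c] = 0.0
    for t in range(1, T + 1):
        for c in cameras:
            for d in range(1, min(max_len, t) + 1):   # shot duration
                s = t - d
                for p in cameras:                      # previous camera
                    cut = 0.0 if s == 0 else cut_cost(p, c)
                    cost = best[s][p] + cut + shot_cost(c, s, t)
                    if cost < best[t][c]:
                        best[t][c] = cost
                        back[t][c] = (s, p)
    # Recover the shot sequence (camera, start, end) from the backpointers.
    c = min(cameras, key=lambda cam: best[T][cam])
    shots, t = [], T
    while t > 0:
        s, p = back[t][c]
        shots.append((c, s, t))
        t, c = s, p
    return list(reversed(shots))
```

The run time is O(T · C² · L) for C cameras and maximum shot length L, which is what makes the dynamic programming solution computationally efficient compared with searching over all segmentations.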