We propose a method for real-time motion planning with applications in aerial videography. Taking framing objectives, such as the position of targets in the image plane, as input, our method automatically solves for robot trajectories and gimbal controls and adapts plans in real time as the environment changes. We contribute a real-time receding-horizon planner that autonomously records scenes with moving targets while optimizing target visibility and ensuring collision-free trajectories. We propose a modular cost function based on the re-projection error of targets; it allows for flexibility and artistic freedom and is well behaved under numerical optimization. We formulate the constrained minimization as a finite-horizon optimal control problem that fulfills aesthetic objectives, adheres to the non-linear model constraints of the filming robot and to collision constraints with static and dynamic obstacles, and can be solved in real time. We demonstrate the robustness and efficiency of the method with a number of challenging shots filmed in dynamic environments, including scenes with moving obstacles and shots in which multiple targets are filmed simultaneously.
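To make the re-projection-error idea concrete, the sketch below scores one target against a desired screen position under a standard pinhole camera model. It is a minimal illustration, not the paper's actual cost: the function name, its arguments, and the simple quadratic penalty are our assumptions.

import numpy as np

def reprojection_cost(p_target_w, T_cw, K, px_desired, weight=1.0):
    # p_target_w: (3,) target position in the world frame
    # T_cw: (4, 4) world-to-camera transform (drone pose + gimbal angles)
    # K: (3, 3) pinhole intrinsics; px_desired: (2,) desired pixel location
    p_c = T_cw @ np.append(p_target_w, 1.0)   # target in camera frame
    uv = (K @ (p_c[:3] / p_c[2]))[:2]         # projection (assumes z > 0)
    err = uv - px_desired
    # Smooth quadratic penalty on the screen-space error; terms of this
    # form are well behaved under gradient-based numerical optimization.
    return weight * float(err @ err)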
We propose a method for automated aerial videography in dynamic and cluttered environments. An online receding-horizon optimization formulation facilitates the planning process for novices and experts alike. The algorithm takes as input high-level plans, which we dub virtual rails, alongside interactively defined aesthetic framing objectives, and jointly solves for 3D quadcopter motion plans and associated velocities. The method generates control inputs subject to the constraints of a non-linear quadrotor model and to dynamic constraints imposed by actors moving in an a priori unknown way. The output plans are physically feasible over the horizon length, and we apply the resulting control inputs directly at each time step, without requiring a separate trajectory-tracking algorithm. The online nature of the method enables the incorporation of feedback into the planning and control loop and makes the algorithm robust to disturbances. Furthermore, we extend the method to include coordination between multiple drones, enabling dynamic multi-view shots typical of action sequences and live TV coverage. The algorithm runs in real time on standard hardware and computes motion plans for several drones on the order of milliseconds. Finally, we evaluate the approach qualitatively with a number of challenging shots involving multiple drones and actors, and we quantitatively characterize the computational performance experimentally.
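The control pattern described here — solve a finite-horizon problem online, execute only the first input, then re-plan with fresh feedback — can be sketched as below. The solve_horizon, get_measurements, and apply_control callables are placeholders we introduce for illustration; the paper's actual solver is its constrained quadrotor optimal control problem.

def receding_horizon_loop(x0, solve_horizon, get_measurements, apply_control,
                          horizon=20, dt=0.05, steps=200):
    # Skeleton of an online receding-horizon planner (illustrative only).
    # solve_horizon(x, obs, N) -> (states, controls) over an N-step horizon.
    x = x0
    for _ in range(steps):
        obs = get_measurements()            # actor poses, obstacle estimates
        _, controls = solve_horizon(x, obs, horizon)
        u = controls[0]                     # apply only the first input...
        x = apply_control(x, u, dt)         # ...then re-plan with feedback
    return x

Applying only the first control input and re-planning at every step is what lets disturbances and the actors' unpredicted motion be absorbed by the loop.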
This paper presents a distributed method for formation control of a homogeneous team of aerial or ground mobile robots navigating in environments with static and dynamic obstacles. Each robot in the team has a finite communication and visibility radius and shares information with its neighbors to coordinate. Our approach leverages both constrained optimization and multi-robot consensus to compute the parameters of the multi-robot formation. This ensures that the robots make progress and avoid collisions with static and moving obstacles. In particular, via distributed consensus, the robots compute (a) the convex hull of the robot positions, (b) the desired direction of movement, and (c) a large convex region embedded in the four-dimensional position-time free space. The robots then compute, via sequential convex programming, the locally optimal parameters for the formation to remain within the convex neighborhood of the robots. The method allows for reconfiguration of the formation. Each robot then navigates towards its assigned position in the target collision-free formation via an individual controller that accounts for its dynamics. This approach is efficient and scales with the number of robots. We present an extensive evaluation of the communication requirements and verify the method in simulations with up to sixteen quadrotors. Lastly, we present experiments with four real quadrotors flying in formation in an environment with one moving human.
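As an illustration of the consensus step underlying items (a)-(c), the sketch below runs standard discrete-time average consensus, in which each robot repeatedly nudges its local estimate toward its neighbors' estimates. This is a generic building block under our own assumptions, not the paper's exact protocol.

import numpy as np

def average_consensus(values, neighbors, iters=50, eps=0.2):
    # values: (n, d) array, one local estimate per robot
    #         (e.g. each robot's guess of the desired movement direction)
    # neighbors: dict robot_id -> list of ids within communication radius
    # eps must stay below 1 / (max node degree) for convergence.
    v = np.asarray(values, dtype=float).copy()
    for _ in range(iters):
        v_next = v.copy()
        for i, nbrs in neighbors.items():
            for j in nbrs:
                v_next[i] += eps * (v[j] - v[i])
        v = v_next
    return v  # on a connected graph, all rows converge to the average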
Some aerial tasks are achieved more efficiently and at a lower cost by a group of independently controlled micro aerial vehicles (MAVs) than by a single, more sophisticated robot. Controlling formation flight can be cast as a two-level problem: stabilization of the relative distances between agents (formation shape control) and control of the center of gravity of the formation. To date, accurate shape control of a formation of MAVs usually relies on external tracking devices (e.g. fixed cameras) or signals (e.g. GPS) and uses centralized control, which severely limits its deployment. In this paper, we present an environment-independent approach for relative MAV formation flight, using a distributed control algorithm that relies only on embedded sensing and agent-to-agent communication. In particular, an on-board monocular camera is used to acquire relative distance measurements in combination with a consensus-based distributed Kalman filter. We evaluate our methods indoors and outdoors with a formation of three MAVs while controlling the formation's center of gravity manually.
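The combination of local relative-distance measurements with a consensus-based distributed Kalman filter can be sketched roughly as below, in the spirit of the classic Kalman-consensus filter; the matrices, the gain gamma, and the structure of the update are our generic assumptions rather than the paper's exact filter.

import numpy as np

def kalman_consensus_step(x_hat, P, z, neighbor_estimates, F, H, Q, R,
                          gamma=0.1):
    # Local measurement update using the on-board (e.g. monocular) measurement z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_upd = x_hat + K @ (z - H @ x_hat)
    P_upd = (np.eye(len(x_hat)) - K @ H) @ P
    # Consensus term: pull the estimate toward neighbors' communicated estimates.
    for x_j in neighbor_estimates:
        x_upd = x_upd + gamma * (x_j - x_hat)
    # Predict forward with the process model.
    return F @ x_upd, F @ P_upd @ F.T + Q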
We propose an approach to capture subjective first-person view (FPV) videos by drones for automated cinematography. FPV shots are intentionally not smooth, which increases the level of immersion for the audience, and are usually captured by a walking camera operator holding traditional camera equipment. Our goal is to automatically control a drone in such a way that it imitates the motion dynamics of a walking camera operator and, in turn, captures FPV videos. To this end, given a user-defined camera path, orientation, and velocity, we first present a method to automatically generate the operator's motion pattern and the associated motion of the camera, taking into account the damping mechanism of the camera equipment. Second, we propose a general computational approach that generates the drone commands to imitate the desired motion pattern. We express this task as a constrained optimization problem in which we aim to fulfill high-level user-defined goals while imitating the dynamics of the walking camera operator and respecting the drone's physical constraints. Our approach is fully automatic, runs in real time, and is interactive, which provides artistic freedom in designing shots. It does not require a motion capture system and works both indoors and outdoors. The validity of our approach has been confirmed via quantitative and qualitative evaluations.
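One way to realize the "walking operator plus damped rig" motion pattern is to synthesize a gait-frequency vertical bob and pass it through a damped second-order system standing in for the camera equipment's damping mechanism. The sketch below does exactly that; every constant in it (amplitude, gait frequency, natural frequency, damping ratio) is an assumption of ours, not a value from the paper.

import numpy as np

def operator_height_profile(path_z, velocity, dt,
                            steps_per_m=1.8, amplitude=0.02,
                            omega_n=25.0, zeta=0.7):
    # Vertical bob whose frequency scales with walking speed.
    t = np.arange(len(path_z)) * dt
    bob = amplitude * np.sin(2.0 * np.pi * steps_per_m * velocity * t)
    # Damped second-order filter: z'' = w^2 (u - z) - 2 zeta w z'
    z, zd = 0.0, 0.0
    out = np.empty_like(bob)
    for i, u in enumerate(bob):
        zdd = omega_n**2 * (u - z) - 2.0 * zeta * omega_n * zd
        zd += zdd * dt
        z += zd * dt
        out[i] = path_z[i] + z   # deliberately unsmooth FPV height profile
    return out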
In this paper we propose an algorithm for training neural-network control policies for quadrotors. The learned control policy computes control commands directly from sensor inputs and is hence computationally efficient. An imitation learning algorithm produces a policy that reproduces the behavior of a path-following control algorithm with collision avoidance. Owing to the generalization ability of neural networks, the resulting policy performs local collision avoidance of unseen obstacles while following a global reference path. The algorithm uses a time-free model predictive path-following controller as a supervisor, which generates demonstrations by following a few example paths. This yields an easy-to-implement learning algorithm that is robust to errors of the model used in the model predictive controller. The policy is trained on the real quadrotor, which requires collision-free exploration around the example path; an adapted version of the supervisor is used to enable this exploration. Thus, the policy can be trained from a relatively small number of examples on the real quadrotor, making the training sample-efficient.
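A generic DAgger-style loop of the kind described — roll out the current policy, label the visited states with the supervisor (here, the MPC path follower), and refit — might look as follows. The policy, supervisor, and rollout interfaces are placeholders we assume for illustration; the paper's adapted supervisor for safe on-robot exploration is not modeled here.

import numpy as np

def imitation_training_loop(policy, supervisor, rollout, n_iters=10):
    # policy: object with .predict(obs) and .fit(obs_batch, act_batch)
    # supervisor: callable obs -> expert action (the MPC path follower)
    # rollout: callable policy -> list of observations from one flight
    obs_data, act_data = [], []
    for _ in range(n_iters):
        observations = rollout(policy)        # visit the states the
        obs_data.extend(observations)         # current policy reaches
        act_data.extend(supervisor(o) for o in observations)
        policy.fit(np.asarray(obs_data), np.asarray(act_data))
    return policy

Aggregating data across iterations keeps the policy anchored to states it actually encounters, which is what makes the scheme robust to model errors in the supervisor.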