Camera control is essential in both virtual and real-world environments. The quality of the camera's placement and motion can spell the difference between usability and confusion. Our work focuses on an instance of camera control called target following, and offers an algorithm for following multiple targets with unpredictable trajectories among known obstacles. To the best of our knowledge, this work is the first to address this important problem.

1 Our Method and Results

In multi-target following, the camera's primary objective is to follow and maximize the visibility of multiple moving targets. For example, in video games, a third-person-view camera may be controlled to follow a group of characters through complicated virtual environments. In robotics, a camera attached to a robotic manipulator could be controlled to observe live performers in a concert, monitor the assembly of a mechanical system, or maintain task visibility during teleoperated surgical procedures. In general, it is difficult for a user to manually control the camera while also concentrating on other critical tasks. It is therefore desirable to have an autonomous camera system that handles the camera movement.

The camera control problem has been studied extensively in both computer graphics and robotics (see a recent survey [Christie et al. 2008]) because of its broad applications. Unfortunately, many of these methods are not applicable to real-time multi-target following among obstacles. Building a camera controller that follows the targets can be viewed as an online motion planning problem, in which the planner must generate a trajectory for the camera and predict the targets' motions in real time.

In this poster, we present a sampling-based planner that robustly follows multiple targets among obstacles. We assume that the workspace is populated with known obstacles represented by polygons. These polygons are the projections of 3D objects that can potentially block the camera's view; the projection essentially reduces our problem to a 2D workspace.

We assume that, initially, the targets $\mathcal{T}$ are visible to the camera $C$, and that, during the entire simulation, $\mathcal{T}$ exhibits a certain degree of coherence (similar to a flock of birds). The targets are controlled either by the user or by another program, so their trajectories are not known in advance. However, we assume that the maximum (linear) velocity of the targets, $v_T^{\max}$, is known. The current positions $X_T(t)$ of some targets $T \subset \mathcal{T}$ at time $t$ are known only if $T$ is in $C$'s view range.

The camera $C$ also has bounded linear velocity $v_C^{\max}$. The camera's view range $V_C$ is defined as a tuple $V_C = (\theta, r_{near}, r_{far})$, where $\theta$ is the view angle, and $r_{near}$ and $r_{far}$ define the near and far view distances. The exact configuration of this view range at time $t$, denoted $V_C(t)$, is determined by the tuple and the camera's location $x_C(t)$. The position $x_C$ of the camera is governed by the linear update $x_C(t + \Delta t) = x_C(t) + \Delta t \cdot v_C(t)$. The problem of target following …
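The camera model above is compact enough to state in code. The following Python sketch is our illustration, not code from the poster; names such as `Camera`, `ViewRange`, and `is_visible` are ours. It implements the bounded-velocity update $x_C(t + \Delta t) = x_C(t) + \Delta t \cdot v_C(t)$ and a visibility test against the view range $V_C = (\theta, r_{near}, r_{far})$ and the polygonal obstacles:

```python
import math
from dataclasses import dataclass

@dataclass
class ViewRange:
    theta: float    # view angle theta (radians)
    r_near: float   # near view distance r_near
    r_far: float    # far view distance r_far

@dataclass
class Camera:
    x: float
    y: float
    heading: float   # view direction (radians)
    v_max: float     # bounded linear speed v_C^max
    view: ViewRange

    def step(self, vx: float, vy: float, dt: float) -> None:
        """Kinematic update x_C(t + dt) = x_C(t) + dt * v_C(t),
        with the commanded velocity clamped to v_max."""
        speed = math.hypot(vx, vy)
        if speed > self.v_max:
            vx, vy = vx * self.v_max / speed, vy * self.v_max / speed
        self.x += dt * vx
        self.y += dt * vy

def segments_intersect(p, q, a, b) -> bool:
    """True if segment pq strictly crosses segment ab (orientation test)."""
    def orient(o, s, t):
        return (s[0] - o[0]) * (t[1] - o[1]) - (s[1] - o[1]) * (t[0] - o[0])
    d1, d2 = orient(p, q, a), orient(p, q, b)
    d3, d4 = orient(a, b, p), orient(a, b, q)
    return d1 * d2 < 0 and d3 * d4 < 0

def is_visible(cam: Camera, target, obstacles) -> bool:
    """Target is in V_C(t): within [r_near, r_far], inside the view
    angle, and not occluded by any obstacle polygon edge."""
    dx, dy = target[0] - cam.x, target[1] - cam.y
    dist = math.hypot(dx, dy)
    if not (cam.view.r_near <= dist <= cam.view.r_far):
        return False
    # Angular test against the camera heading, normalized to [-pi, pi).
    ang = abs((math.atan2(dy, dx) - cam.heading + math.pi) % (2 * math.pi) - math.pi)
    if ang > cam.view.theta / 2:
        return False
    # Line-of-sight test against every edge of every obstacle polygon.
    p, q = (cam.x, cam.y), target
    for poly in obstacles:  # poly: list of (x, y) vertices
        for i in range(len(poly)):
            if segments_intersect(p, q, poly[i], poly[(i + 1) % len(poly)]):
                return False
    return True
```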
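Since the planner itself is only summarized in this section, the sketch below is our hedged reconstruction of one planning cycle consistent with the stated assumptions, not the poster's exact algorithm. Building on the `Camera` and `is_visible` sketch above, it samples candidate camera velocities bounded by $v_C^{\max}$, forward-simulates one step of duration $\Delta t$, and keeps the motion that maximizes the number of visible targets; the centroid tie-breaker and sample budget are our choices.

```python
import math
import random
from dataclasses import replace   # uses the Camera dataclass sketched above

def follow_step(cam, targets, obstacles, dt, n_samples=64):
    """One sampling-based planning cycle (our sketch): return a
    velocity (vx, vy) with |v| <= v_C^max for the next time step."""
    visible = [t for t in targets if is_visible(cam, t, obstacles)]
    if not visible:
        # No current observation; a real planner would predict target
        # positions from their known bound v_T^max and search for them.
        return (0.0, 0.0)
    cx = sum(t[0] for t in visible) / len(visible)
    cy = sum(t[1] for t in visible) / len(visible)

    best_v, best_count, best_dist = (0.0, 0.0), -1, float("inf")
    for _ in range(n_samples):
        # Sample a candidate velocity within the camera's speed bound.
        ang = random.uniform(0.0, 2.0 * math.pi)
        speed = random.uniform(0.0, cam.v_max)
        vx, vy = speed * math.cos(ang), speed * math.sin(ang)
        # Forward-simulate the candidate step on a copy of the camera.
        nx, ny = cam.x + dt * vx, cam.y + dt * vy
        trial = replace(cam, x=nx, y=ny)
        # Aim the view at the visible targets' centroid (our heuristic).
        trial.heading = math.atan2(cy - ny, cx - nx)
        count = sum(is_visible(trial, t, obstacles) for t in visible)
        dist = math.hypot(cx - nx, cy - ny)
        # Prefer more visible targets; break ties by centroid distance.
        if count > best_count or (count == best_count and dist < best_dist):
            best_v, best_count, best_dist = (vx, vy), count, dist
    return best_v
```

Calling `follow_step` once per frame and feeding its result to `Camera.step` yields a simple closed-loop follower; the poster's planner presumably also reasons about the targets' future reachable sets, which this one-step sketch omits.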