SUMMARY    The subspace representation model is an important class of visual tracking algorithms. Compared with models that operate in the original data space, subspace representation models can effectively reduce computational complexity and filter out high-dimensional noise. However, in some complicated situations, e.g., dramatic illumination changes, large-area occlusion, and abrupt object drift, traditional subspace representation models may fail at the visual tracking task. In this paper, we propose a novel subspace representation algorithm for robust visual tracking using low-rank representation with graph constraints (LRGC). Low-rank representation is well known for its ability to handle corrupted samples, and graph constraints flexibly characterize the relationships among samples. In this paper, we aim to exploit the benefits of both low-rank representation and graph constraints, and deploy them to handle challenging visual tracking problems. Specifically, we first propose a novel graph structure to characterize the relationship of the target object across different observation states. Then we learn a subspace by jointly optimizing low-rank representation and graph embedding in a unified framework. Finally, the learned subspace is embedded into a Bayesian inference framework through the dynamical model and the observation model. Experiments on several video benchmarks demonstrate that our algorithm performs better than traditional ones, especially in dynamically changing and drifting situations.
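For a concrete picture of the joint optimization mentioned above, a common way to combine low-rank representation with a graph-embedding constraint can be written as follows (an illustrative sketch only; the symbols X, D, Z, E, L, \lambda, and \beta are introduced here for exposition and are not taken from the paper):

    \min_{Z,E} \; \|Z\|_* + \lambda \|E\|_{2,1} + \beta \,\operatorname{tr}\!\left(Z L Z^{\top}\right) \quad \text{s.t.} \quad X = DZ + E,

where X stacks the observed candidate samples, D is a dictionary of target templates, Z is the low-rank representation coefficient matrix, E absorbs sample-wise corruptions such as occlusion, L is the graph Laplacian built from the proposed graph structure over observation states, and \lambda, \beta are trade-off weights. In this kind of formulation, the nuclear norm \|Z\|_* encourages a low-rank representation, the \ell_{2,1} norm on E tolerates column-wise corruption, and the trace term enforces the graph constraint.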