We propose a technique that propagates information forward through video data. The method is conceptually simple and can be applied to tasks that require the propagation of structured information, such as semantic labels, based on video content. We propose a Video Propagation Network (VPN) that processes video frames in an adaptive manner. The model is applied online: it propagates information forward without the need to access future frames. In particular, we combine two components: a temporal bilateral network for dense, video-adaptive filtering, followed by a spatial network that refines features and increases flexibility. We present experiments on video object segmentation and semantic video segmentation and show improved performance compared to the best previous task-specific methods, while having favorable runtime. Additionally, we demonstrate our approach on an example regression task of color propagation in a grayscale video.

VPNs offer several advantages:

- General applicability: VPNs can be used to propagate any type of information content, i.e., both discrete (e.g., semantic labels) and continuous (e.g., color) information across video frames.
- Online propagation: The method needs no future frames and can be used for online video analysis.
- Long-range and image adaptive: VPNs can efficiently handle a large number of input frames and are adaptive to the video, with long-range pixel connections.
- End-to-end trainable: VPNs can be trained end-to-end, so they can be used inside other deep network architectures.
- Favorable runtime: VPNs have a favorable runtime in comparison to many current best methods, which makes them amenable to learning with large datasets.

Empirically, we show that VPNs, despite being generic, perform better than published approaches on video object segmentation and semantic label propagation while being faster. VPNs can easily be integrated into sequential per-frame approaches and require only a small fine-tuning step that can be performed separately.
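To make the two-component design concrete, the following is a minimal, illustrative PyTorch sketch, not the authors' implementation: a brute-force Gaussian filter over a hand-chosen (x, y, r, g, b, t) feature space stands in for the paper's fast, learnable bilateral filtering, and a small residual CNN stands in for the spatial refinement network. All names (`bilateral_propagate`, `SpatialRefine`), feature choices, and shapes here are assumptions made for this sketch.

```python
# Minimal sketch of the two-stage VPN idea: bilateral (feature-space) label
# propagation from a previous frame, followed by spatial refinement.
# NOTE: illustrative approximation only; the paper uses fast, learnable
# high-dimensional filtering rather than the brute-force filter shown here.
import torch
import torch.nn as nn


def bilateral_propagate(prev_feats, prev_labels, cur_feats, theta=0.2):
    """Propagate per-pixel labels from a previous frame to the current one.

    prev_feats, cur_feats: (N, D) per-pixel features, e.g. (x, y, r, g, b, t).
    prev_labels:           (N, C) label scores attached to the previous frame.
    Returns (N, C) filtered label scores for the current frame's pixels.
    """
    # Pairwise squared distances in feature space -> Gaussian affinities.
    d2 = ((cur_feats[:, None, :] - prev_feats[None, :, :]) ** 2).sum(dim=-1)
    w = torch.exp(-d2 / (2.0 * theta ** 2))                 # (N, N)
    w = w / w.sum(dim=1, keepdim=True).clamp_min(1e-8)      # row-normalize
    return w @ prev_labels                                  # (N, C)


class SpatialRefine(nn.Module):
    """Small spatial CNN that refines the bilaterally filtered labels."""

    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x) + x  # residual refinement


# Toy usage: propagate a 2-class mask from frame t-1 to frame t.
H = W = 16
C = 2
prev_feats = torch.rand(H * W, 6)   # hypothetical (x, y, r, g, b, t) features
cur_feats = torch.rand(H * W, 6)
prev_labels = torch.rand(H * W, C)

coarse = bilateral_propagate(prev_feats, prev_labels, cur_feats)  # (H*W, C)
coarse = coarse.t().reshape(1, C, H, W)                           # to NCHW
refined = SpatialRefine(C)(coarse)                                # (1, C, H, W)
print(refined.shape)
```

Because the filter weights depend on pixel features rather than fixed grid offsets, pixels that look alike exchange information even when they are far apart in space or time, which is what makes the propagation video-adaptive and long-range.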