This paper addresses two crucial aspects of visual point tracking. First, the algorithm should reliably track as many points as possible. Second, the computation should be fast, which is challenging on low-power embedded platforms. We propose a new multi-scale semi-dense point tracker called Video Extruder, whose purpose is to fill the gap between short-term, dense motion estimation (optical flow) and long-term, sparse salient-point tracking. This paper presents a new detector, including a new salience function with low computational complexity and a new selection strategy that yields a large number of keypoints. Its density and reliability in mobile video scenarios are compared with those of the FAST detector. We then present a multi-scale prediction and matching strategy, based on a hybrid regional coarse-to-fine and temporal prediction, which provides robustness to large camera and object accelerations. Filtering and merging strategies are then used to eliminate most of the wrong or useless trajectories. Thanks to its high degree of parallelism, the proposed algorithm extracts beams of trajectories from the video very quickly. We compare it with the state-of-the-art pyramidal Lucas-Kanade point tracker and show that, in fast mobile video scenarios, it yields results of similar quality while being up to one order of magnitude faster. Three parallel implementations of this tracker are presented, targeting multi-core CPUs, GPUs and ARM SoCs. On a commodity 2010 CPU, it tracks 8 500 points in a 640 × 480 video at 150 Hz.