In this paper, we present a novel framework for combining independent on-line trackers using visual scene context. The aim of our method is to decide automatically, at each point in time, which specific tracking algorithm works best under the given scene or acquisition conditions.

In the literature, many ways of combining, fusing or selecting visual features have been presented. For example, low-level fusion of features (such as motion or shape) is applied to improve foreground-background discrimination (e.g. [2, 8]). Fusion is also possible at a higher level, where several trackers are run in parallel in order to select or combine their respective results (e.g. [1, 5]). In terms of model or feature fusion, our previous work (Moujtahid et al. [6]) used the confidence values of several individual trackers based on different visual features, coupled with a spatio-temporal coherence criterion, to select the most suitable tracker at a given instant and enforce the continuity of tracking.

The main idea behind our framework is to exploit the strengths of different tracking algorithms, as well as scene context information, in order to improve tracking performance. To this end, we introduce a framework that combines several independent and complementary trackers, each specialised in different scene conditions. The decision on which tracker to select is made by an off-line trained classifier which, in turn, is based on general scene context features that are independent of the trackers.

The general procedure of the proposed tracking framework is illustrated in Fig. 1. On a given video, N independent trackers T_n (n ∈ 1..N) run in parallel and, at every frame t, each produces an estimate of the object's state, usually a bounding box B_{t,n} with an associated confidence value c_{t,n}. The objective is to select at each frame the best tracker, i.e. the one whose bounding box best fits the object to track.

At the same time, scene context features f_t are extracted. These features correspond to first- and second-order statistics of a given image-related variable such as intensity, colour and motion. They are computed on different image regions, giving local, global and differential values. The scene features f_t are concatenated with additional measures, such as the trackers' confidence values c_t and the identifier s_{t-1} of the last selected tracker, to form a large feature vector i_t. An N-class classifier, trained off-line on annotated data, is then applied to these features to estimate the best tracker for the given scene context. The classifier responds with y_t, a probability for each class, which is subsequently filtered by a Hidden Markov Model (HMM) to ensure the temporal continuity of the tracker selection and to reject outliers. The HMM estimates the posterior probability distribution x_t, which is used to select the best tracker. Finally, a Kalman filter is applied as a post-processing step to temporally smooth the resulting object bounding box B_{t,s} from the selected tracker T_s. The r...
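The selection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature-vector layout, the scikit-learn-style `predict_proba` classifier interface, and the form of the HMM transition matrix are all assumptions.

```python
import numpy as np

def hmm_filter_step(prior, likelihood, transition):
    """One HMM forward-filtering step: predict the tracker state with the
    transition matrix, update with the classifier probabilities y_t, and
    normalise to obtain the posterior distribution x_t."""
    predicted = transition.T @ prior      # x_{t|t-1}
    posterior = predicted * likelihood    # unnormalised x_t
    return posterior / posterior.sum()

def select_tracker(scene_feats, confidences, last_selected,
                   n_trackers, classifier, prior, transition):
    """Build the feature vector i_t, score it with the off-line trained
    classifier, filter temporally, and return the selected tracker index."""
    one_hot = np.eye(n_trackers)[last_selected]       # encodes s_{t-1}
    i_t = np.concatenate([scene_feats, confidences, one_hot])
    y_t = classifier.predict_proba(i_t[None, :])[0]   # per-tracker probabilities
    x_t = hmm_filter_step(prior, y_t, transition)
    return int(np.argmax(x_t)), x_t
```

A "sticky" transition matrix (large diagonal entries) penalises switching trackers at every frame, which is one simple way to obtain the temporal continuity and outlier rejection the HMM is meant to enforce.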
Lane information is essential for safe autonomous driving. In this article, we present a multisensor fusion framework for the ego and adjacent lanes, with a novel fusion quality measure and dynamic lane-mode strategies for managing erroneous data. The framework fuses road marking lines based on Dempster-Shafer theory and tracks lanes with a particle filter. Then, a quality measure is computed for each line, integrating sensor coherence, availability and temporal continuity. This quality measure is essential for deploying the different lane-management strategies that avoid integrating erroneous data. The proposed framework was evaluated in a lateral control architecture with autonomous driving on open roads and proved its robustness and availability.
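The per-line quality measure could take a form like the following sketch. The weighted-sum combination, the weights, and the rejection threshold are all assumptions for illustration; the abstract only names the three cues that are integrated.

```python
def line_quality(coherence, availability, continuity,
                 weights=(0.4, 0.3, 0.3)):
    """Combine the three cues named in the abstract (sensor coherence,
    availability, temporal continuity), each assumed to lie in [0, 1],
    into a single quality score; the weighted-sum form is an assumption."""
    w_c, w_a, w_t = weights
    return w_c * coherence + w_a * availability + w_t * continuity

def should_integrate(quality, threshold=0.5):
    """Gate the fusion step: reject a line whose quality falls below a
    (hypothetical) threshold instead of integrating erroneous data."""
    return quality >= threshold
```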