Vehicle re-identification is key to keeping track of vehicles monitored by a multi-camera network with non-overlapping views. In this paper, we propose a probabilistic framework based on a two-step strategy that re-identifies vehicles in road tunnels. The first step splits the re-identification problem by connecting groups of vehicles observed in different cameras using motion and appearance criteria. In the second step, we build a Bayesian model that finds the optimal assignment between vehicles of connected groups. Descriptors such as trace transform signatures, lane changes, and motion discrepancies are used to derive our probabilistic framework. Experimental tests reveal that the connected groups produced by the first step contain 4 vehicles on average. This allows us to constrain the number of candidate matches and increase the chances of obtaining the correct match. In the second step, our Bayesian model succeeds in matching vehicles among candidates with very similar appearance and under uneven illumination conditions. Overall, our system achieves a re-identification accuracy of 92% using a nearest-neighbor matcher and 98% using a one-to-one matcher. These results outperform previous work and encourage us to further develop our solution for other re-identification applications.
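The abstract does not spell out how the one-to-one matcher is implemented; since connected groups contain only about 4 vehicles, one plausible sketch is an exhaustive optimal assignment over permutations of a small cost matrix (e.g. negative log-likelihoods from the Bayesian model). The cost values below are hypothetical.

```python
from itertools import permutations

def one_to_one_match(cost):
    """Brute-force optimal one-to-one assignment: find the permutation of
    camera-B candidates that minimizes the total matching cost.
    Feasible here because connected groups are small (~4 vehicles)."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return list(best_perm), best_cost

# Hypothetical 3x3 cost matrix between vehicles of two connected groups
cost = [
    [0.1, 0.9, 0.8],
    [0.7, 0.2, 0.9],
    [0.8, 0.8, 0.3],
]
match, total = one_to_one_match(cost)
# match[i] is the camera-B index assigned to vehicle i in camera A
```

Unlike a nearest-neighbor matcher, which picks the cheapest candidate per vehicle independently (and may assign the same candidate twice), this jointly optimal assignment enforces the one-to-one constraint, which is consistent with the higher accuracy reported for the one-to-one matcher.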
In this paper, we present an efficient way to both compute and extract salient information from trace transform signatures for object identification tasks. We also present a feature selection analysis of the classical trace transform functionals, which reveals that most of them retrieve redundant information, causing misleading similarity measurements. To overcome this problem, we propose a set of functionals based on Laguerre polynomials that yield signatures that are orthonormal across functionals. In this way, each signature provides salient, non-correlated information that contributes to the description of an image object. The proposed functionals were tested on a vehicle identification problem, outperforming the classical trace transform functionals in terms of both computational complexity and identification rate.
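The abstract does not give the exact form of the Laguerre-based functionals, but the key idea is that Laguerre polynomials are orthogonal under the weight $e^{-t}$ on $[0, \infty)$, so projecting the samples along a trace line onto Laguerre basis functions yields mutually uncorrelated signature components. A minimal sketch of this assumption (the functional form and sampling step are hypothetical):

```python
import math

def laguerre(k, x):
    """Laguerre polynomial L_k(x) via the standard three-term recurrence:
    (n+1) L_{n+1} = (2n+1-x) L_n - n L_{n-1}."""
    if k == 0:
        return 1.0
    prev, cur = 1.0, 1.0 - x
    for n in range(1, k):
        prev, cur = cur, ((2 * n + 1 - x) * cur - n * prev) / (n + 1)
    return cur

def laguerre_functional(samples, k, dt=0.01):
    """Hypothetical trace functional: project the samples f(t) taken along
    a trace line onto the k-th Laguerre basis function L_k(t) * exp(-t/2),
    which is orthonormal under the plain dt measure on [0, inf)."""
    return sum(
        laguerre(k, i * dt) * math.exp(-i * dt / 2.0) * f * dt
        for i, f in enumerate(samples)
    )

# Numerical check of orthonormality on a fine grid (t up to 50):
dt = 0.01
s0 = [laguerre(0, i * dt) * math.exp(-i * dt / 2.0) for i in range(5000)]
g00 = laguerre_functional(s0, 0, dt)   # ~1 (functional applied to its own basis)
g01 = laguerre_functional(s0, 1, dt)   # ~0 (distinct functionals decorrelate)
```

Because the basis functions are orthonormal, distinct functionals applied to the same trace line contribute non-redundant components, in contrast with the correlated responses of the classical functionals described above.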
This paper proposes a generic methodology for the semi-automatic generation of reliable position annotations for evaluating multi-camera people-trackers on large video data sets. Most of the annotation data are computed automatically, by estimating a consensus tracking result from multiple existing trackers and people detectors and classifying it as either reliable or not. A small subset of the data, composed of tracks with insufficient reliability, is verified by a human using a simple binary decision task, a process faster than marking the correct person position. The proposed framework is generic and can handle additional trackers. We present results on a data set of $\sim 6$ h captured by 4 cameras, featuring a person in a holiday flat performing activities such as walking, cooking, eating, cleaning, and watching TV. When aiming for a tracking accuracy of 60 cm, 80% of all video frames are automatically annotated. The annotations for the remaining 20% of the frames were added after human verification of an automatically selected subset of data, which involved $\sim 2.4$ h of manual labor. A subsequent comprehensive visual inspection of the annotation procedure found 99% of the automatically annotated frames to be correct. We provide guidelines on how to apply the proposed methodology to new data sets. We also provide an exploratory study for the multi-target case, applied to existing and new benchmark video sequences.
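The abstract leaves the consensus and reliability rules unspecified; a plausible minimal sketch is to take the per-coordinate median of the trackers' position estimates as the consensus and flag a frame reliable only when every tracker agrees with that consensus to within the 60 cm target accuracy. The tolerance rule and the example coordinates below are assumptions, not the paper's actual criteria.

```python
from statistics import median
from math import dist

def consensus_position(estimates):
    """Consensus ground-plane position: per-coordinate median of the
    (x, y) estimates produced by the individual trackers."""
    xs, ys = zip(*estimates)
    return (median(xs), median(ys))

def frame_is_reliable(estimates, tol=0.6):
    """Hypothetical reliability test: a frame is auto-annotated when every
    tracker falls within tol metres of the consensus (0.6 m here, matching
    the 60 cm target accuracy); otherwise it is queued for the faster
    binary human-verification step instead of manual position marking."""
    c = consensus_position(estimates)
    return all(dist(e, c) <= tol for e in estimates)

# Three hypothetical tracker outputs for one frame (metres)
agree = [(1.0, 2.0), (1.1, 2.1), (0.9, 1.9)]   # trackers concur -> auto-annotate
split = [(1.0, 2.0), (1.1, 2.1), (3.0, 4.0)]   # one outlier -> human verification
```

The median makes the consensus robust to a single failing tracker, while the agreement threshold routes only genuinely ambiguous frames to the human, which is consistent with the reported 80%/20% split between automatic and verified annotations.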