It is well known that video material with a static background is easier to segment than material with a moving background. One approach to segmenting sequences with a moving background is to preprocess them so as to create a static background, after which conventional background subtraction techniques can be used to segment foreground objects. Global motion estimation and background sprite generation techniques have recently been shown to be reliable. We propose a new background modeling technique for object segmentation based on local background sprite generation. Experimental results show the excellent performance of the new method compared with recently proposed algorithms.
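To make the pipeline concrete: once a static background estimate is available (e.g. from a background sprite), foreground segmentation reduces to thresholded background subtraction against that model. The sketch below uses a simple running-average background and illustrative parameter values; it is not the paper's local sprite method, and all names are assumptions.

```python
# Hypothetical sketch: background subtraction against a static background
# model. A 1-D list of pixel intensities stands in for an image.

def update_background(background, frame, alpha=0.05):
    """Running-average model: bg <- (1 - alpha) * bg + alpha * frame."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]

def segment_foreground(background, frame, threshold=30):
    """Mark a pixel as foreground when it deviates strongly from the model."""
    return [abs(f - b) > threshold for b, f in zip(background, frame)]

# Toy example: a bright object enters a flat background of intensity 10.
background = [10.0] * 8
frame = [10, 10, 200, 210, 10, 10, 10, 10]
mask = segment_foreground(background, frame)
```

With a truly static background, the same subtraction works without the running-average update; the update merely keeps the model current under slow illumination changes.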
In many computer vision applications, local optical flow methods are still widely used. Such methods, e.g. Pyramidal Lucas-Kanade and Robust Local Optical Flow, must address the trade-off between run time and accuracy. In this work we propose an extension to these methods that improves accuracy, especially at object boundaries. The extension makes use of the cross-based variable support region generation proposed in [1], which accounts for local intensity discontinuities. In an evaluation on the Middlebury data set we show that the proposed extension increases accuracy at only a slight increase in run time.
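The core of a local (Lucas-Kanade-style) method can be shown in one dimension: within a small support window, the displacement u minimizes the linearized brightness-constancy residual, giving u = -Σ(Ix·It) / Σ(Ix²). The sketch below uses a fixed window; the paper's contribution is precisely to replace such a fixed window with a cross-based variable support region, which is not reproduced here.

```python
# Illustrative 1-D analogue of local optical flow estimation.
# A fixed symmetric window is an assumption for this toy example.

def local_flow_1d(I0, I1, x, radius=2):
    """Least-squares displacement estimate around pixel x."""
    num = den = 0.0
    for i in range(x - radius, x + radius + 1):
        Ix = (I0[i + 1] - I0[i - 1]) / 2.0  # central spatial gradient
        It = I1[i] - I0[i]                  # temporal gradient
        num += Ix * It
        den += Ix * Ix
    return -num / den if den else 0.0

# A linear ramp shifted right by one pixel: true displacement is +1.
I0 = [float(i) for i in range(10)]
I1 = [v - 1.0 for v in I0]
u = local_flow_1d(I0, I1, x=5)
```

The fixed window fails when the window straddles two differently moving objects; adapting the support region to intensity discontinuities, as in [1], is what sharpens estimates at object boundaries.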
Active 3D-scanning systems based on structured light can achieve highly accurate reconstructions of scene surfaces. Structured light algorithms based on phase measuring triangulation (PMT) project phase-shifted sinusoidal patterns into the scene for a precise determination of correspondences. The number of patterns used for this purpose varies with the design of the algorithm. No matter how many patterns are required, all of these algorithms suffer from the acquisition time needed to record them sequentially. In a dynamic scene, the sequential acquisition of images captures moving objects in different poses, which in turn leads to erroneous reconstructions depending on the objects' velocity. Our goal is a more robust result during dynamic scene capture as well as a higher scene reconstruction rate. Two novel approaches are presented that reduce the number of patterns required for a high-accuracy 3D-reconstruction. This is achieved by incorporating passive matching techniques into the phase-unwrapping stage of the algorithm, allowing half of the sinusoidal patterns to be dropped.
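For context, the standard N-step phase-shifting formula underlying PMT recovers, per pixel, the wrapped phase φ from patterns I_k = A + B·cos(φ + 2πk/N) as φ = atan2(-Σ I_k sin δ_k, Σ I_k cos δ_k) with δ_k = 2πk/N. The sketch below only illustrates this classical step; the paper's pattern-reduction and passive-matching phase unwrapping are not reproduced.

```python
import math

def wrapped_phase(intensities):
    """Recover the wrapped phase of one pixel from N phase-shifted samples."""
    n = len(intensities)
    s = sum(I * math.sin(2 * math.pi * k / n) for k, I in enumerate(intensities))
    c = sum(I * math.cos(2 * math.pi * k / n) for k, I in enumerate(intensities))
    return math.atan2(-s, c)

# Simulate 4 phase-shifted samples of one pixel with true phase 0.7 rad
# (offset A = 128, modulation B = 100 are illustrative values).
phi_true = 0.7
samples = [128 + 100 * math.cos(phi_true + 2 * math.pi * k / 4) for k in range(4)]
phi = wrapped_phase(samples)
```

Since atan2 only yields values in (-π, π], the result is wrapped; resolving the 2π ambiguities (phase unwrapping) normally costs additional patterns, which is the stage where the proposed passive matching substitutes for projected patterns.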
Estimating depth from a video sequence remains a challenging task in computer vision, with numerous applications. Like other authors, we build on two major concepts developed in this field: the hierarchical estimation of depth within an image pyramid and the fusion of depth maps from different views. We compare various local matching methods within such a combined approach and show the relative performance of local image-guided methods against the commonly used fixed-window aggregation. Since efficient implementations of these image-guided methods exist and the available hardware is rapidly improving, the disadvantage of their more complex but parallelizable computation vanishes, and they will become feasible for more applications.
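One common way to fuse depth maps from different views, assumed here to be already warped into a common reference view, is a robust per-pixel median, which suppresses outlier estimates from individual views. This is a generic sketch, not the paper's fusion scheme; the data and function names are illustrative.

```python
import statistics

def fuse_depth_maps(depth_maps):
    """Per-pixel median fusion of aligned depth maps (given as flat lists)."""
    return [statistics.median(vals) for vals in zip(*depth_maps)]

# Three toy 1-D depth maps; the second one has an outlier at pixel 1.
d0 = [2.0, 2.1, 3.0, 4.0]
d1 = [2.0, 9.9, 3.1, 4.0]
d2 = [2.1, 2.0, 3.0, 4.1]
fused = fuse_depth_maps([d0, d1, d2])
```

The median discards the outlier 9.9 at pixel 1, whereas an averaging fusion would be pulled toward it; this robustness is why median-style fusion is popular when per-view depth estimates are noisy.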