This paper introduces a new multi-lateral filter to fuse low-resolution depth maps with high-resolution images. The goal is to enhance the resolution of Time-of-Flight sensors and, at the same time, to reduce the noise level in depth measurements. Our approach is based on joint bilateral upsampling, extended by a new factor that accounts for the low reliability of depth measurements along the edges of the low-resolution depth map. Our experimental results show better performance than alternative depth-enhancing data fusion techniques.
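The core idea above, joint bilateral upsampling with an extra edge-reliability factor, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the neighbourhood radius, the Gaussian parameterisation of the credibility term, and all sigma values are assumptions.

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, guide_hi, scale,
                             sigma_s=1.0, sigma_r=0.1, sigma_c=0.05):
    """Joint bilateral upsampling with a credibility factor (sketch).

    depth_lo : low-resolution depth map, shape (H, W)
    guide_hi : high-resolution intensity image, shape (H*scale, W*scale)
    The credibility term down-weights low-resolution pixels near depth
    edges, where ToF measurements are unreliable (parameters assumed).
    """
    Hh, Wh = guide_hi.shape
    out = np.zeros((Hh, Wh))
    # Credibility: low where the low-res depth gradient is large.
    gy, gx = np.gradient(depth_lo)
    cred = np.exp(-(gx**2 + gy**2) / (2 * sigma_c**2))
    r = 1  # neighbourhood radius in low-resolution coordinates
    for y in range(Hh):
        for x in range(Wh):
            yl, xl = y // scale, x // scale
            wsum = vsum = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    qy, qx = yl + dy, xl + dx
                    if not (0 <= qy < depth_lo.shape[0]
                            and 0 <= qx < depth_lo.shape[1]):
                        continue
                    # Spatial weight in the low-resolution grid.
                    ws = np.exp(-(dy**2 + dx**2) / (2 * sigma_s**2))
                    # Range weight from the high-resolution guide image.
                    g_q = guide_hi[min(qy * scale, Hh - 1),
                                   min(qx * scale, Wh - 1)]
                    wr = np.exp(-(guide_hi[y, x] - g_q)**2 / (2 * sigma_r**2))
                    # Credibility weight from depth-edge reliability.
                    w = ws * wr * cred[qy, qx]
                    wsum += w
                    vsum += w * depth_lo[qy, qx]
            out[y, x] = vsum / wsum if wsum > 0 else depth_lo[yl, xl]
    return out
```

The output at each high-resolution pixel is a convex combination of low-resolution depth values, so upsampled depths always stay within the range of the input measurements.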
We present an adaptive multi-lateral filter for real-time low-resolution depth map enhancement. Despite the great advantages of Time-of-Flight cameras in 3-D sensing, two main drawbacks restrict their use in a wide range of applications; namely, their fairly low spatial resolution compared to other 3-D sensing systems, and the high noise level in their depth measurements. We therefore propose a new data fusion method based upon a bilateral filter. The proposed filter extends the pixel weighted average strategy for depth sensor data fusion. It includes a new factor that adaptively selects 2-D or 3-D data as guidance information. Consequently, unwanted artefacts such as texture copying are almost entirely eliminated, outperforming alternative depth enhancement filters. In addition, our algorithm can be effectively and efficiently implemented for real-time applications.
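The adaptive guidance selection can be illustrated with a small sketch. The specific blending rule below (an exponential of the image-versus-depth gradient mismatch, with gain `k`) is an assumption for illustration, not the paper's formula: texture copying arises where the image has strong gradients that the depth map does not share, so there the filter should fall back on depth-guided smoothing.

```python
import numpy as np

def fuse_adaptive(depth_image_guided, depth_depth_guided, image, depth, k=10.0):
    """Blend two filtered depth maps via an adaptive guidance factor (sketch).

    depth_image_guided : depth filtered with the 2-D image as guidance
    depth_depth_guided : depth filtered with the 3-D data as guidance
    beta -> 1 where image edges are unsupported by depth edges (texture),
    so the 3-D-guided result is preferred there; parameterisation assumed.
    """
    gi = np.hypot(*np.gradient(image.astype(float)))   # image gradient magnitude
    gd = np.hypot(*np.gradient(depth.astype(float)))   # depth gradient magnitude
    beta = 1.0 - np.exp(-k * np.maximum(gi - gd, 0.0) ** 2)
    return beta * depth_depth_guided + (1.0 - beta) * depth_image_guided
```

On textured-but-flat regions beta approaches 1 and the image guidance is ignored, which is what suppresses texture copying.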
Abstract. This paper presents a real-time refinement procedure for depth data acquired by RGB-D cameras. Data from RGB-D cameras suffers from undesired artifacts such as edge inaccuracies or holes due to occlusions or low object remission. In this work, we use recent depth enhancement filters intended for Time-of-Flight cameras, and extend them to structured-light depth cameras such as the Kinect camera. Thus, given a depth map and its corresponding 2-D image, we correct the depth measurements by separately treating its undesired regions. To that end, we propose specific confidence maps to tackle areas in the scene that require a special treatment. Furthermore, in the case of filtering artifacts, we introduce the use of RGB images as guidance images as an alternative to real-time state-of-the-art fusion filters that use grayscale guidance images. Our experimental results show that the proposed fusion filter provides dense depth maps with corrected erroneous or invalid depth measurements and adjusted depth edges. In addition, we propose a mathematical formulation that enables the use of the filter in real-time applications.
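A minimal sketch of the confidence-map idea for invalid regions follows. It assumes a Kinect-style convention where invalid pixels are reported as 0, and uses a simple RGB-guided weighted average to fill them; the window radius and colour-similarity sigma are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def fill_holes(depth, rgb, sigma_r=10.0, radius=2):
    """Confidence-driven hole filling with an RGB guidance image (sketch).

    The confidence map is 0 at invalid (zero-depth) pixels and 1 elsewhere.
    Each invalid pixel is replaced by a weighted average of confident
    neighbours, weighted by colour similarity in the RGB image.
    """
    conf = (depth > 0).astype(float)
    out = depth.astype(float).copy()
    H, W = depth.shape
    for y, x in zip(*np.where(conf == 0)):
        wsum = vsum = 0.0
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                qy, qx = y + dy, x + dx
                if 0 <= qy < H and 0 <= qx < W and conf[qy, qx] > 0:
                    # Colour similarity in the guidance image.
                    d2 = np.sum((rgb[y, x].astype(float)
                                 - rgb[qy, qx].astype(float)) ** 2)
                    w = np.exp(-d2 / (2 * sigma_r ** 2))
                    wsum += w
                    vsum += w * out[qy, qx]
        if wsum > 0:
            out[y, x] = vsum / wsum
    return out
```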
We present a full real-time implementation of a multi-lateral filtering system for the fusion of depth sensor data with 2-D data. For such a system to perform in real time, it is necessary to have not only a real-time implementation of the filter, but also a real-time alignment of the data to be fused. To achieve an automatic data mapping, we express disparity as a function of the distance between the scene and the cameras, and reduce the matching procedure to a simple indexation. Our experiments show that this implementation ensures the fusion of 3-D data and 2-D data in real time and with high accuracy.
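The indexation idea rests on the standard rectified-stereo relation d = f·b/Z: once depth Z is known from the ToF sensor, the corresponding pixel offset in the 2-D image can be looked up rather than searched for. A sketch with assumed focal length, baseline, and table resolution:

```python
import numpy as np

def build_disparity_lut(f_px, baseline_m, z_min, z_max, n=1000):
    """Precompute a distance-to-disparity lookup table (sketch).

    For a rectified camera pair, disparity d = f * b / Z, so depth-to-pixel
    correspondence becomes a table lookup (indexation) instead of matching.
    f_px       : focal length in pixels (assumed value in the test)
    baseline_m : camera baseline in metres
    """
    z = np.linspace(z_min, z_max, n)
    return z, f_px * baseline_m / z

def disparity_for_depth(z_query, z_table, d_table):
    # Nearest-neighbour indexation into the precomputed table.
    i = np.abs(z_table - z_query).argmin()
    return d_table[i]
```

In a real system the table would be built once at calibration time, so the per-pixel mapping at runtime costs only an index computation.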
We propose an extension of our previous work on spatial-domain Time-of-Flight (ToF) data enhancement to the temporal domain. Our goal is to generate enhanced depth maps at the same frame rate as the 2-D camera that, coupled with a ToF camera, constitutes a hybrid ToF multi-camera rig. To that end, we first estimate the motion between consecutive 2-D frames, and then use it to predict the corresponding depth maps. The enhanced depth maps result from the fusion between the recorded 2-D frames and the predicted depth maps, using our previous contribution on ToF data enhancement. The experimental results show that the proposed approach overcomes the ToF camera drawbacks, namely low resolution in space and time and a high level of noise within the depth measurements, providing enhanced depth maps at video frame rate.
Index Terms—Time of Flight, spatio-temporal data enhancement, sensor fusion, multimodal sensors.
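The prediction step, carrying depth measurements along the 2-D motion, can be sketched with a forward warp. This is a deliberate simplification with assumed conventions (dense per-pixel flow, nearest-neighbour splatting, last-writer-wins on collisions), not the paper's estimator:

```python
import numpy as np

def predict_depth(depth_prev, flow):
    """Motion-compensated depth prediction by forward warping (sketch).

    flow[y, x] = (dy, dx) is the 2-D motion estimated between consecutive
    2-D frames; each depth measurement is moved along that motion to
    predict the depth map at the 2-D camera's higher frame rate.
    """
    H, W = depth_prev.shape
    pred = np.zeros_like(depth_prev)
    for y in range(H):
        for x in range(W):
            ny = int(round(y + flow[y, x, 0]))
            nx = int(round(x + flow[y, x, 1]))
            if 0 <= ny < H and 0 <= nx < W:
                pred[ny, nx] = depth_prev[y, x]  # splat to the new position
    return pred
```

A practical implementation would also resolve occlusion collisions (e.g. keep the nearest depth) and fill the holes left by disocclusions, which is where the fusion with the recorded 2-D frame comes in.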
Abstract. This paper presents a general refinement procedure that enhances any given depth map obtained by passive or active sensing. Given a depth map, either estimated by triangulation methods or directly provided by the sensing system, and its corresponding 2-D image, we correct the depth values by separately treating regions with undesired effects such as empty holes, texture copying, or edge blurring due to homogeneous regions, occlusions, and shadowing. In this work, we use recent depth enhancement filters intended for Time-of-Flight cameras, and adapt them to alternative depth sensing modalities, both active, using an RGB-D camera, and passive, using a dense stereo camera. To that end, we propose specific masks to tackle areas in the scene that require a special treatment. Our experimental results show that such areas are satisfactorily handled by replacing erroneous depth measurements with accurate ones.
This paper presents a novel approach to estimate the human pose from a body-scanned point cloud. To do so, a predefined skeleton model is first initialized according to both the skeleton base point and its torso limb, obtained by Principal Component Analysis (PCA). Then, the body parts are iteratively clustered and the skeleton limb fitting is performed, based on Expectation Maximization (EM). The human pose is given by the location of each skeletal node in the fitted skeleton model. Experimental results show the ability of the method to estimate the human pose from multiple point cloud video sequences representing the external surface of a scanned human body; the method is robust and precise, and handles large portions of missing data due to occlusions, acquisition hindrances, or registration inaccuracies.
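The PCA initialisation step can be sketched compactly. The sketch below assumes the base point is the point-cloud centroid and the torso direction is the first principal component of the points, a simplification of the described procedure:

```python
import numpy as np

def torso_axis(points):
    """PCA-based skeleton initialisation (sketch).

    points : (N, 3) array of body-surface points.
    Returns the base point (centroid, assumed) and the torso limb
    direction (largest-variance principal axis).
    """
    base = points.mean(axis=0)
    centered = points - base
    # Principal directions from the covariance eigen-decomposition.
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)
    torso_dir = eigvecs[:, np.argmax(eigvals)]  # largest-variance axis
    return base, torso_dir
```

For a standing subject the first principal component aligns with the vertical torso, which gives the EM limb fitting a sensible starting pose.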