Conventional approaches to image de-fencing have limited themselves to using only image data in adjacent frames of a captured video of an approximately static scene. In this work, we present a method that harnesses disparity from a stereo pair of fenced images in order to detect fence pixels. Tourists and amateur photographers commonly carry smartphones/phablets, which can be used to capture a short video sequence of the fenced scene. We model the formation of the occluded frames in the captured video. Furthermore, we propose an optimization framework that estimates the de-fenced image using a total variation prior to regularize the ill-posed problem.
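The disparity cue behind this abstract can be sketched as follows. Since a fence sits nearest the camera in a stereo pair, its pixels exhibit the largest disparity; thresholding a disparity map therefore yields a fence mask. This is a hypothetical illustration, not the paper's actual detector, and the `margin` threshold is an assumption:

```python
import numpy as np

def fence_mask_from_disparity(disparity, margin=0.8):
    """Label as fence the pixels whose disparity is near the maximum.

    Fence pixels are closest to the camera, so they have the largest
    stereo disparity. `margin` (a fraction of the disparity range) is an
    illustrative threshold, not a value from the paper.
    """
    d = np.asarray(disparity, dtype=float)
    lo, hi = d.min(), d.max()
    threshold = lo + margin * (hi - lo)
    return d >= threshold

# Toy disparity map: background at ~2 px, a vertical fence wire at ~10 px.
disp = np.full((4, 6), 2.0)
disp[:, 2] = 10.0
mask = fence_mask_from_disparity(disp)
```

In practice the disparity map would come from stereo matching between the two fenced views; any robust per-pixel depth ordering would serve the same purpose.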
Conventional approaches to image de-fencing use multiple adjacent frames for segmentation of fences in the reference image and are limited to restoring images of static scenes only. In this paper, we propose a de-fencing algorithm for images of dynamic scenes using an occlusion-aware optical flow method. We divide the problem of image de-fencing into the tasks of automated fence segmentation from a single image, motion estimation under known occlusions, and fusion of data from multiple frames of a captured video of the scene. Specifically, we use a pre-trained convolutional neural network to segment fence pixels from a single image. The knowledge of the spatial locations of fences is used to subsequently estimate optical flow in the occluded frames of the video for the final data fusion step. We cast the fence removal problem in an optimization framework by modeling the formation of the degraded observations. The inverse problem is solved using the fast iterative shrinkage-thresholding algorithm (FISTA). Experimental results show the effectiveness of the proposed algorithm.
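The FISTA solver named in this abstract can be sketched on a deliberately simplified model. The paper's data term fuses several flow-warped frames and regularizes with total variation; here a single masked observation and an l1 penalty stand in for that model so the accelerated proximal-gradient structure stays visible. All parameter values are assumptions for illustration:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 (elementwise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(y, mask, lam=0.01, n_iter=100):
    """FISTA for min_x 0.5*||mask*(x - y)||^2 + lam*||x||_1.

    A simplified stand-in for the paper's formulation: one masked
    observation instead of multiple warped frames, and an l1 prior
    instead of total variation. The gradient of the data term has
    Lipschitz constant 1 for a 0/1 mask, so the step size is 1.
    """
    x = np.zeros_like(y)
    z = x.copy()          # extrapolated point
    t = 1.0               # momentum parameter
    for _ in range(n_iter):
        grad = mask * (z - y)                 # gradient of the data term at z
        x_new = soft_threshold(z - grad, lam) # proximal (shrinkage) step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov extrapolation
        x, t = x_new, t_new
    return x
```

For this separable toy objective the iteration reaches its fixed point quickly: observed entries converge to the shrunk observations and unobserved entries stay at zero, which is exactly why a real system needs the multi-frame data term to fill occlusions.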
In recent times, the availability of inexpensive image capturing devices such as smartphones/tablets has led to an exponential increase in the number of images/videos captured. However, the amateur photographer is sometimes hindered by fences in the scene, which have to be removed after the image has been captured. Conventional approaches to image de-fencing suffer from inaccurate and non-robust fence detection, apart from being limited to processing images of only static occluded scenes. In this paper, we propose a semi-automated de-fencing algorithm using a video of the dynamic scene. We use convolutional neural networks for detecting fence pixels. We provide qualitative as well as quantitative comparison results with existing lattice detection algorithms on the PSU NRT dataset [1] and on a proposed challenging fenced image dataset. The inverse problem of fence removal is solved using the split Bregman technique, assuming total variation of the de-fenced image as the regularization constraint.
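The split Bregman technique this abstract relies on can be sketched in one dimension. This is a minimal illustration of the splitting of Goldstein and Osher, not the paper's full image-domain solver: it denoises a 1-D signal under a total-variation prior, with d = Du as the split variable and b the Bregman variable, and all parameter values are assumptions:

```python
import numpy as np

def tv_denoise_split_bregman(f, lam=2.0, mu=1.0, n_iter=50):
    """1-D total-variation denoising via split Bregman.

    Solves min_u (lam/2)*||u - f||^2 + ||D u||_1, with D the
    forward-difference operator, by introducing d = D u and a Bregman
    variable b. The u-update is a small linear solve; the d-update is a
    shrinkage; b accumulates the constraint residual.
    """
    n = len(f)
    # Forward-difference matrix D of shape (n-1, n).
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    A = lam * np.eye(n) + mu * D.T @ D  # system matrix of the u-update
    shrink = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
    u = np.asarray(f, dtype=float).copy()
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    for _ in range(n_iter):
        u = np.linalg.solve(A, lam * f + mu * D.T @ (d - b))
        d = shrink(D @ u + b, 1.0 / mu)
        b = b + D @ u - d
    return u
```

For images the same structure applies with horizontal and vertical difference operators, and the u-update is typically solved with Gauss-Seidel sweeps or FFTs rather than a dense solve.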
Low cost RGB-D sensors such as the Microsoft Kinect have enabled the use of depth data along with color images. In this work, we propose a multi-modal approach to address the problem of removal of fences/occlusions from images captured using a Kinect camera. We also perform depth completion by fusing data from multiple recorded depth maps affected by occlusions. The availability of aligned image and depth data from Kinect aids us in the detection of the fence locations. However, accurate estimation of the relative shifts between the captured color frames is necessary. Initially, for the case of static scene elements with simple relative motion between the camera and the objects, we propose the use of the affine scale-invariant feature transform (ASIFT) descriptor to compute the relative global displacements. We also address the scenario wherein the relative motion between the frames may not be global using the depth map obtained by Kinect. For such a scenario involving complex motion of scene pixels, we use a recently proposed robust optical flow technique. We show results for challenging real-world data wherein the scene is dynamic. The inverse ill-posed problems of estimation of the de-fenced image and the inpainted depth map are solved using an optimization-based framework. Specifically, we model the unoccluded image and the completed depth map as two distinct Markov random fields and obtain their maximum a posteriori estimates using loopy belief propagation.
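The global-displacement estimation step described above (which the paper performs with ASIFT matching) can be illustrated with a simpler stand-in: phase correlation, which recovers a purely global, cyclic integer translation between two frames from the cross-power spectrum. This sketch assumes the translation-only case and is not the paper's method:

```python
import numpy as np

def estimate_global_shift(ref, moved):
    """Estimate the integer (row, col) translation of `moved` relative
    to `ref` by phase correlation.

    The normalized cross-power spectrum of a cyclically shifted image is
    a pure complex exponential, so its inverse FFT is a delta at the
    shift. A lightweight stand-in for feature-based (e.g. ASIFT)
    displacement estimation; assumes a single global translation.
    """
    F_ref = np.fft.fft2(ref)
    F_mov = np.fft.fft2(moved)
    cross = F_mov * np.conj(F_ref)
    cross /= np.abs(cross) + 1e-12   # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peaks to signed shifts.
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))
```

When the motion is not global, per-pixel flow (as the abstract notes) replaces this single displacement.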
The advent of inexpensive smartphones/tablets/phablets equipped with cameras has resulted in the average person capturing cherished moments as images/videos and sharing them on the internet. However, at several locations, an amateur photographer may be frustrated with the captured images. For example, the object of interest to the photographer might be occluded or fenced. Currently available image de-fencing methods in the literature are limited by non-robust fence detection and can handle only static occluded scenes whose video is captured by constrained camera motion. In this work, we propose an algorithm to obtain a de-fenced image using a few frames from a video of the occluded static or dynamic scene. We also present a new fenced image database captured under challenging scenarios such as clutter, poor lighting, viewpoint distortion, etc. Initially, we propose a supervised learning-based approach to detect fence pixels and validate its performance with qualitative as well as quantitative results. We rely on the idea that freehand panning of the fenced scene is likely to render visible hidden pixels of the reference frame in other frames of the captured video. Our approach necessitates the solution of three problems: (i) detection of spatial locations of fences/occlusions in the frames of the video, (ii) estimation of relative motion between the observations, and (iii) data fusion to fill in occluded pixels in the reference image. We assume the de-fenced image as a Markov random field and obtain its maximum a posteriori estimate by solving the corresponding inverse problem. Several experiments on synthetic and real-world data demonstrate the effectiveness of the proposed approach.
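The data fusion step (iii) above can be sketched under the assumption that the frames have already been registered to the reference and per-frame fence masks are known. This hypothetical fusion rule simply averages, at each occluded reference pixel, the frames that see it; the paper's actual MAP estimation over a Markov random field is a more principled version of the same idea:

```python
import numpy as np

def fuse_frames(frames, fence_masks):
    """Fill fence pixels of the reference (first) frame from other
    aligned frames.

    Illustrative sketch: `frames` are assumed pre-aligned to the
    reference, `fence_masks` are True at fence pixels. Each occluded
    reference pixel takes the mean of the frames in which it is visible;
    pixels hidden in every frame are left unchanged.
    """
    frames = np.asarray(frames, dtype=float)
    masks = np.asarray(fence_masks, dtype=bool)
    visible = ~masks
    counts = visible.sum(axis=0)            # frames seeing each pixel
    summed = (frames * visible).sum(axis=0)
    fused = frames[0].copy()
    fillable = masks[0] & (counts > 0)      # occluded in ref, seen elsewhere
    fused[fillable] = summed[fillable] / counts[fillable]
    return fused
```

This is where the freehand-panning assumption matters: the camera motion must expose, in some frame, the pixels the fence hides in the reference.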