Three-dimensional (3D) shape recovery is an important problem in computer vision. Shape from Focus (SFF) is a passive technique that uses focus information to estimate the 3D shape of an object in the scene. Images are captured at multiple positions along the optical axis of the imaging device and stored in a stack. To reconstruct the 3D shape of the object, the best-focused position of each pixel is obtained by maximizing the focus curve produced by a focus measure operator. In this article, a Deep Neural Network (DNN) is employed to extract a more accurate depth for each object point in the image stack. Each image in the stack is first downsampled and then fed to the proposed DNN to aggregate the shape. The initial shape is refined with a median filter, and the reconstructed shape is then resized to the original dimensions using bilinear interpolation. The results are compared with those of commonly used focus measure operators in terms of root mean squared error (RMSE), correlation, and the image quality index (Q). Compared with other methods, the proposed DNN-based SFF method achieves higher precision at a lower computational cost.
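The classical depth-from-focus pipeline that this abstract builds on can be sketched as follows. This is a minimal NumPy sketch, not the article's DNN method: the sum-modified-Laplacian focus measure and the 3×3 median window are assumptions, since the abstract does not fix them.

```python
import numpy as np

def modified_laplacian(img):
    """Sum-modified-Laplacian focus measure for one grayscale frame."""
    p = np.pad(img, 1, mode="edge")
    # |2*I(x,y) - I(x-1,y) - I(x+1,y)| + |2*I(x,y) - I(x,y-1) - I(x,y+1)|
    ml_x = np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:])
    ml_y = np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1])
    return ml_x + ml_y

def depth_from_focus(stack):
    """stack: (N, H, W) frames taken along the optical axis.
    Returns the per-pixel index of the best-focused frame,
    i.e. the initial depth map (argmax of the focus curve)."""
    focus = np.stack([modified_laplacian(f) for f in stack])  # (N, H, W)
    return np.argmax(focus, axis=0)

def median_filter3(depth):
    """3x3 median filter used to refine the initial depth map."""
    p = np.pad(depth, 1, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, (3, 3))
    return np.median(win.reshape(depth.shape + (9,)), axis=-1)
```

On a synthetic stack where each frame is sharp (textured) in a different band of rows, `depth_from_focus` recovers the index of the textured frame at each pixel.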
Shape from Focus (SFF) has been studied extensively in computer vision for 3D shape and depth recovery. The first stage in SFF methods is to compute the focus value of every pixel by converting the color images to grayscale and then applying a focus measure operator. Converting color values to grayscale may map pixels with different colors onto the same grayscale value, which degrades the overall accuracy of the system. In a color image, focused pixels maintain a considerable color difference from their neighboring pixels, whereas defocused pixels blend into their neighborhood. This article presents an alternative method to measure the degree of focus by processing color images directly. The color differences of the neighboring pixels with respect to the central pixel are computed and summed, and their spread is then calculated. The sum and the spread are combined to measure the degree of focus of the pixel under consideration. The proposed focus measure is then used for shape recovery of various simulated and real objects and is compared with previous techniques. The comparison shows that the proposed method attains the highest correlation and smallest RMSE values, confirming the effectiveness of using color images for shape recovery.
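The color-difference focus measure described above can be sketched as follows, assuming Euclidean RGB distance to the eight neighbors, the standard deviation as the "spread", and a product as the way the sum and spread are combined; all three choices are assumptions, since the abstract does not specify them.

```python
import numpy as np

def color_focus_measure(img):
    """img: (H, W, 3) float RGB image.
    For each pixel, compute the color differences to its 8 neighbours,
    sum them, compute their spread, and combine the two terms.
    The Euclidean distance, std-as-spread, and product combination
    are assumptions; the paper only states that a sum and a spread
    are combined."""
    h, w = img.shape[:2]
    p = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0)]
    diffs = []
    for dy, dx in offsets:
        nb = p[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
        diffs.append(np.linalg.norm(img - nb, axis=-1))  # per-pixel color distance
    diffs = np.stack(diffs)                              # (8, H, W)
    return diffs.sum(axis=0) * diffs.std(axis=0)
```

In a flat (defocused-like) region all eight differences vanish, so the measure is zero, while a textured (focused-like) region yields a positive response.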
3D shape recovery of an object from its 2D images based on image focus has been an important field of research. Shape from Focus (SFF) is one of the passive methods for recovering the shape of an object. Most existing approaches work well on densely textured objects; however, they cannot compute the depth of weakly textured scenes with great precision. In this paper, we propose a new SFF algorithm that improves the recovered shape of weakly textured objects. The proposed method is evaluated on image sequences of synthetic and real objects with varying texture, and it provides better results than previous approaches.