We conduct a thorough study of photometric stereo under nearby point-light-source illumination, from modeling, through calibration, to numerical solution. In the classical formulation of photometric stereo, the luminous fluxes are assumed to be directional, which is very difficult to achieve in practice. Instead, we use light-emitting diodes (LEDs) to illuminate the scene to be reconstructed. Such point light sources are very convenient to use, yet they yield a more complex photometric stereo model which is harder to solve. We first derive this model in a physically sound manner and show how to calibrate its parameters. Then, we discuss two state-of-the-art numerical solutions. The first…
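In the classical directional setting that this work generalizes, Lambertian photometric stereo reduces to per-pixel linear least squares: stacking the images gives I = L (ρ n), solved for the albedo-scaled normal. A minimal sketch of that baseline (the function name and toy setup are illustrative, not the nearby point-light model derived in the paper):

```python
import numpy as np

def photometric_stereo_lambertian(I, L):
    """Recover albedo and unit normals from an image stack I (m x p pixels)
    under m calibrated directional lights L (m x 3), assuming I = L @ (rho*n)."""
    # Least-squares solve for b = rho * n at every pixel (columns of I).
    b, *_ = np.linalg.lstsq(L, I, rcond=None)
    rho = np.linalg.norm(b, axis=0)          # albedo is the magnitude of b
    n = b / np.maximum(rho, 1e-12)           # unit normal is its direction
    return rho, n
```

With m ≥ 3 non-coplanar lights the system is (over)determined and the solve is exact for noise-free Lambertian data; the point-light model discussed in the abstract makes L depend on the unknown depth, which is what complicates the solution.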
The need for efficient normal integration methods is driven by several computer vision tasks such as shape-from-shading, photometric stereo, and deflectometry. In the first part of this survey, we select the most important properties that one may expect from a normal integration method, based on a thorough study of two pioneering works by Horn and Brooks [28] and by Frankot and Chellappa [19]. Apart from accuracy, an integration method should at least be fast and robust to a noisy normal field. In addition, it should be able to handle several types of boundary condition, including the case of a free boundary, and a reconstruction domain of any shape, i.e., not necessarily rectangular. Ideally, it should also require tuning as few parameters as possible, or none at all. Finally, it should preserve depth discontinuities. In the second part of this survey, we review most of the existing methods in view of this analysis, and conclude that none of them satisfies all of the required properties. This work is complemented by a companion paper entitled Variational Methods for Normal Integration, in which we focus on the problem of normal integration in the presence of depth discontinuities, a problem which occurs as soon as there are occlusions.
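The Frankot-Chellappa method cited above integrates a gradient field in the least-squares sense by projecting it onto integrable Fourier basis functions. A minimal sketch on a periodic domain, assuming orthographic gradients p = ∂z/∂x and q = ∂z/∂y on a regular grid:

```python
import numpy as np

def frankot_chellappa(p, q):
    """Least-squares integration of the gradient field (p, q) into a depth
    map z, via the Fourier-domain normal equations (Frankot-Chellappa)."""
    h, w = p.shape
    wx = np.fft.fftfreq(w) * 2.0 * np.pi     # angular frequencies along x
    wy = np.fft.fftfreq(h) * 2.0 * np.pi     # angular frequencies along y
    u, v = np.meshgrid(wx, wy)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = u**2 + v**2
    denom[0, 0] = 1.0                        # avoid division by zero at DC
    Z = (-1j * u * P - 1j * v * Q) / denom
    Z[0, 0] = 0.0                            # the mean depth is unconstrained
    return np.real(np.fft.ifft2(Z))
```

Note how this illustrates the survey's criteria: the method is fast (two FFTs) and parameter-free, but it assumes a rectangular, periodic domain and smooths over depth discontinuities.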
Shape from shading with multiple light sources is an active research area, and a diverse range of approaches has been proposed in recent decades. However, devising a robust reconstruction technique remains a challenging goal, as the image acquisition process is highly nonlinear. Recent photometric stereo variants rely on simplifying assumptions in order to make the problem solvable: light propagation is still commonly assumed to be uniform, and the Bidirectional Reflectance Distribution Function (BRDF) is assumed to be diffuse, with limited applicability to specular materials. In this work, we introduce a well-posed formulation based on partial differential equations (PDEs) for a unified reflectance function that can model both diffuse and specular reflections. We base our derivation on ratios of images, which makes the model independent of photometric invariants and yields a well-posed differential problem based on a system of quasi-linear PDEs with discontinuous coefficients. In addition, we directly solve a differential problem for the unknown depth, thus avoiding the intermediate step of approximating the normal field. A variational approach is presented that ensures robustness to noise and outliers (such as black shadows), as confirmed by a wide range of experiments on both synthetic and real data, where we compare favorably to the state of the art.

Reflectance. Most research on the PS technique to date has assumed purely diffuse reflectance for the BRDF. Shape recovery from specular shading, however, still remains a challenging goal, since most common materials produce specular highlights that prevent reasonable reconstructions by the PS technique. Regarding shading models for specular highlights, several dedicated irradiance equations have been presented so far. First, Torrance and Sparrow [56] presented a physical model based on radiometric principles.
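The image-ratio idea can be illustrated in the simple Lambertian case: dividing two irradiance equations cancels the unknown albedo, and cross-multiplying leaves a linear, albedo-free constraint on the normal. A toy sketch (the function name is illustrative and the Lambertian setup is a simplification of the paper's unified reflectance function):

```python
import numpy as np

def ratio_constraint(I1, I2, l1, l2, n):
    """Albedo-free photometric constraint from an image ratio.
    For Lambertian irradiances I_k = rho * (l_k . n), the ratio
    I1 / I2 = (l1 . n) / (l2 . n) is independent of rho; cross-multiplying
    gives the linear constraint (I2 * l1 - I1 * l2) . n = 0."""
    return (I2 * l1 - I1 * l2) @ n
```

In the paper, constraints of this kind are written for the gradient of the unknown depth, producing the quasi-linear PDE system solved directly for z.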
Later, Phong [48] proposed an empirical model that extended the cosine law by making it depend also on the viewer direction. The Blinn-Phong shading model [4] further extended this by removing some limitations of its analytical formulation. Then, Cook and Torrance [14] provided a well-known specular model based on a strongly nonlinear physical theory. Other important models for specular BRDFs can be found in [33], and interesting comparisons among some of them have been performed in [44].

Instead of simplifying the PS problem by removing specularity, other works dealt with the images as they are, containing both specular and diffuse components. Nayar, Ikeuchi, and Kanade [43] assumed hybrid surfaces, summing diffuse and specular components using the Beckmann and Spizzichino [3] reflection model, which allowed them to locally reconstruct the shape of the object. Ikeuchi [28] tackled the PS problem with specular reflectance by introducing a smoothness prior on the surface, which was shown to be realistic for several industrial applications, though this approach is limite...
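The Blinn-Phong model mentioned above replaces Phong's mirror-reflection vector with the half-vector between light and viewer, which is cheaper to evaluate and better behaved at grazing angles. A minimal sketch (coefficient names and values are illustrative):

```python
import numpy as np

def blinn_phong(n, l, v, kd=0.6, ks=0.4, shininess=32):
    """Blinn-Phong irradiance: Lambertian cosine term plus a specular
    lobe centered on the half-vector h = (l + v) / |l + v|.
    n, l, v are unit vectors (normal, light, viewer)."""
    h = (l + v) / np.linalg.norm(l + v)
    diffuse = kd * max(float(n @ l), 0.0)
    specular = ks * max(float(n @ h), 0.0) ** shininess
    return diffuse + specular
```

The shininess exponent controls the width of the highlight; as it grows, the specular lobe sharpens toward a mirror spike, which is precisely the behavior that breaks the diffuse assumption of classical PS.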
This paper tackles the photometric stereo problem in the presence of inaccurate lighting, obtained either by calibration or by an uncalibrated photometric stereo method. Based on a precise modeling of noise and outliers, a robust variational approach is introduced. It explicitly accounts for self-shadows, and enforces robustness to cast shadows and specularities by resorting to redescending M-estimators. The resulting non-convex model is solved by means of a computationally efficient alternating reweighted least-squares algorithm. Since it implicitly enforces integrability, the new variational approach can refine both the intensities and the directions of the lighting.
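The reweighted least-squares idea can be sketched on a generic linear system: each iteration solves a weighted least-squares problem whose weights, derived from a redescending M-estimator, vanish for gross outliers. This is a simplified stand-in for the paper's alternating scheme, using a Cauchy estimator as one example of a (softly) redescending influence function:

```python
import numpy as np

def irls(A, b, n_iter=50):
    """Iteratively reweighted least squares for A x = b with a Cauchy
    M-estimator: rows with large residuals get vanishing weights."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    for _ in range(n_iter):
        r = A @ x - b
        # Robust scale estimate from the median absolute deviation.
        sigma = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        w = 1.0 / (1.0 + (r / sigma) ** 2)        # Cauchy weights
        sw = np.sqrt(w)
        x, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
    return x
```

In the paper, the rows of this system are the per-pixel, per-image irradiance equations, so cast shadows and specularities show up exactly as the outlier rows that the weights suppress.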
Figure 1: Given an RGB-D sequence of n ≥ 4 low-resolution (320 × 240 px) depth maps and high-resolution (1280 × 1024 px) RGB images acquired from the same viewing angle but under varying, unknown lighting, high-resolution depth and reflectance maps are estimated by combining super-resolution and photometric stereo within a variational framework. (Panels: input HR RGB images and LR depth maps; output HR albedo and depth maps; relighting.)

Abstract: A novel depth super-resolution approach for RGB-D sensors is presented. It disambiguates depth super-resolution through high-resolution photometric clues and, symmetrically, it disambiguates uncalibrated photometric stereo through low-resolution depth cues. To this end, an RGB-D sequence is acquired from the same viewing angle, while illuminating the scene from various uncalibrated directions. This sequence is handled by a variational framework which fits high-resolution shape and reflectance, as well as lighting, to both the low-resolution depth measurements and the high-resolution RGB ones. The key novelty consists in a new PDE-based photometric stereo regularizer which implicitly ensures surface regularity. This allows carrying out depth super-resolution in a purely data-driven manner, without the need for any ad-hoc prior or material calibration. Real-world experiments are carried out using an out-of-the-box RGB-D sensor and a hand-held LED light source.
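The coupling described above can be sketched as a single energy: a photometric data term ties the high-resolution depth to the RGB images through shading, while a fidelity term ties its downsampled version to the low-resolution depth measurements. A toy orthographic version (all names are illustrative; the paper works with perspective depth and its PDE-based regularizer):

```python
import numpy as np

def downsample(z, f):
    """Average-pool depth by factor f: a crude sketch of the sensor's
    low-resolution sampling operator."""
    h, w = z.shape
    return z.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def energy(z_hr, rho, l, I_hr, z_lr, f, lam=1.0):
    """Toy coupled objective: HR photometric term + LR depth fidelity."""
    # Normals from depth gradients (orthographic, Lambertian sketch).
    p, q = np.gradient(z_hr)
    n = np.dstack([-p, -q, np.ones_like(z_hr)])
    n = n / np.linalg.norm(n, axis=2, keepdims=True)
    shading = np.clip(n @ l, 0.0, None)
    photometric = np.sum((I_hr - rho * shading) ** 2)     # fit HR images
    depth = np.sum((downsample(z_hr, f) - z_lr) ** 2)     # fit LR depth
    return photometric + lam * depth
```

The photometric term alone leaves super-resolution ill-posed (a flat surface and a tilted one can shade identically under unknown lighting), and the depth term alone cannot recover fine detail; minimizing their sum is what resolves both ambiguities.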
Photometric stereo (PS) techniques nowadays remain constrained to an ideal laboratory setup where modeling and calibration of lighting are amenable. To eliminate such restrictions, we propose an efficient, principled variational approach to uncalibrated PS under general illumination. To this end, the Lambertian reflectance model is approximated through a spherical harmonic expansion, which preserves the spatial invariance of the lighting. The joint recovery of shape, reflectance, and illumination is then formulated as a single variational problem. There, shape estimation is carried out directly in terms of the underlying perspective depth map, thus implicitly ensuring integrability and bypassing the need for a subsequent normal integration. To tackle the resulting nonconvex problem numerically, we undertake a two-phase procedure: initialization with a balloon-like perspective depth map, followed by a "lagged" block coordinate descent scheme. The experiments validate the efficiency and robustness of this approach. Across a variety of evaluations, we consistently reduce the mean angular error by a factor of 2-3 compared to the state of the art. * Authors contributed equally. ¹ https://github.com/zhenzhangye/general_ups
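In the first-order spherical harmonic approximation used for such general-illumination models, shading becomes an affine function of the surface normal, with four lighting coefficients per image. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def sh_shading(n, sh):
    """First-order spherical harmonic shading: s = sh[0] + sh[1:4] . n,
    where n is a unit normal and sh holds the 4 lighting coefficients
    (ambient term plus a directional part)."""
    return sh[0] + n @ sh[1:4]
```

A single directional light is the special case sh = [0, l], so this family strictly generalizes calibrated directional PS while keeping the lighting spatially invariant, as the abstract notes; second-order models add five more quadratic terms in n.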