Shadows are ubiquitous in image and video data, and their removal is of interest in both Computer Vision and Graphics. We present an interactive, robust, and high-quality method for fast shadow removal. To perform detection, we use an on-the-fly learning approach guided by two rough user inputs marking pixels of the shadow and the lit area. From these we derive a fusion image that magnifies the intensity change at shadow boundaries due to illumination variation. After detection, we perform shadow removal by registering the penumbra to a normalised frame, which allows us to efficiently estimate non-uniform shadow illumination changes, resulting in accurate and robust removal. We also present a reliable, validated, multi-scene-category ground truth for shadow removal algorithms, which overcomes issues such as inconsistencies between shadow and shadow-free images and limited variation in shadow types. Using our data, we perform the most thorough comparison of state-of-the-art shadow removal methods to date. Our algorithm outperforms the state of the art, and we supply our code, evaluation data, and scripts to encourage future open comparisons.

Shadow removal ground truth. The first public data set was supplied in [2]. In our work, we propose a new data set that introduces multiple shadow categories and overcomes potential environmental-illumination and registration errors between the shadow and ground-truth images. An example comparison is shown in Fig. 1. Our new data set avoids these issues using a careful capture setup and a quantitative test for rejecting unavoidable capture failures due to environmental effects. Our images are also categorised according to 4 different attributes. An example from our data without these problems is shown in (c).

Our algorithm consists of 3 steps (see Fig. 2): 1) Pre-processing. We detect an initial shadow mask (Fig. 2(b)) using a KNN classifier trained on data from two rough user inputs (e.g. Fig. 2(a)).
We generate a fusion image, which magnifies illumination discontinuities around shadow boundaries, by fusing channels of the YCrCb colour space and suppressing texture (Fig. 2(c)). 2) Penumbra unwrapping. Based on the detected shadow mask and fusion image, we sample pixel intensities along sampling lines perpendicular to the shadow boundary (Fig. 2(d)), remove noisy samples, and store the remainder as columns of the initial penumbra strip (Fig. 2(e)). We align the initial columns' illumination changes using the strip's intensity-conversion image (Fig. 2(f)). This results in an aligned penumbra strip (Fig. 2(g)) whose conversion image (Fig. 2(h)) exhibits a more stable profile. 3) Estimation of shadow scale and relighting. Unlike previous work [1, 2], we do not assume a constrained model of illumination change. The columns of the penumbra strip are first clustered into a few small groups. A unified sample is synthesised by averaging the samples in each group (e.g. Fig. 2(i)), which cancels texture noise. Our shadow scale is derived adaptively and quickly from the unified samples. The derived sparse scales f...
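The pre-processing step above can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the feature choice (raw RGB), the neighbour count, and the function and argument names (`detect_shadow`, `shadow_mask`, `lit_mask`) are our own assumptions.

```python
# Minimal sketch of the on-the-fly KNN shadow detection, assuming scikit-learn.
# shadow_mask / lit_mask are boolean images marking the two rough user strokes.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def detect_shadow(image, shadow_mask, lit_mask, k=5):
    """Label every pixel as shadow (1) or lit (0) using a KNN classifier
    trained only on the pixels the user marked."""
    h, w, _ = image.shape
    feats = image.reshape(-1, 3).astype(np.float64)  # per-pixel RGB features
    X = np.vstack([feats[shadow_mask.ravel()], feats[lit_mask.ravel()]])
    y = np.concatenate([np.ones(shadow_mask.sum()), np.zeros(lit_mask.sum())])
    clf = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    return clf.predict(feats).reshape(h, w)  # initial shadow mask
```

Because the classifier is trained per image from only the two strokes, detection adapts to each scene's colours without any offline training.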
To find a way to increase the rate of carbon flux and further improve the photosynthetic rate in rice, two genes relevant to CO2 transport and fixation, Ictb and FBP/Sbpase, derived from cyanobacteria and driven by the CaMV 35S promoter in the respective constructs, were transformed into rice. Three homozygous transgenic groups carrying Ictb, FBP/Sbpase, and the two genes combined were constructed in parallel, and the functional effects of the two genes were investigated by physiological, biochemical, and leaf-anatomy analyses. The results indicated that mesophyll conductance and net photosynthetic rate were higher by approximately 10.5–36.8% and 13.5–34.6%, respectively, in the three groups, without any changes in leaf anatomical structure compared with the wild type. Other physiological and biochemical parameters increased with the same trend in the three groups, showing that the effect of FBP/SBPase on improving photosynthetic capacity was greater than that of ICTB and that the effects were additive in ICTB+FBP/SBPase. ICTB localized in the cytoplasm, whereas FBP/SBPase was successfully transported to the chloroplast. The two genes might interact synergistically to promote carbon flow and the overall assimilation rate. Multigene transformation engineering and its potential utility for improving photosynthetic capacity and yield in rice are discussed.
SEDS-family peptidoglycan (PG) glycosyltransferases, RodA and FtsW, require their cognate transpeptidases PBP2 and FtsI (class B penicillin-binding proteins) to synthesize PG along the cell cylinder and at the septum, respectively. The activities of these SEDS-bPBP complexes are tightly regulated to ensure proper cell elongation and division. In Escherichia coli, FtsN switches FtsA and FtsQLB to active forms that synergize to stimulate FtsWI, but the exact mechanism is not well understood. Previously, we isolated an activating mutation in ftsW (M269I) that allows cell division with reduced FtsN function. To understand the basis for activation, we isolated additional substitutions at this position and found that only the original substitution produced an active mutant, whereas drastic changes resulted in an inactive mutant. In another approach, we isolated suppressors of an inactive FtsL mutant, obtained FtsW E289G and FtsI K211I, and found that they bypassed FtsN. Epistatic analysis of these and other mutations confirmed that the FtsN-triggered activation signal proceeds from FtsQLB to FtsI to FtsW. Mapping these mutations, as well as others affecting the activity of FtsWI, onto the RodA-PBP2 structure revealed that they are located at the interaction interface between extracellular loop 4 (ECL4) of FtsW and the pedestal domain of FtsI (PBP3). This supports a model in which the interaction between ECL4 of SEDS proteins and the pedestal domain of their cognate bPBPs plays a critical role in the activation mechanism.
A user-centric method for fast, interactive, robust, and high-quality shadow removal is presented. Our algorithm can perform detection and removal in a range of difficult cases, such as highly textured and colored shadows. To perform detection, an on-the-fly learning approach is adopted, guided by two rough user inputs marking pixels of the shadow and the lit area. After detection, shadow removal is performed by registering the penumbra to a normalized frame, which allows efficient estimation of non-uniform shadow illumination changes, resulting in accurate and robust removal. Another major contribution of this work is the first validated, multi-scene-category ground truth for shadow removal algorithms. This data set, containing 186 images, eliminates inconsistencies between shadow and shadow-free images and provides a range of different shadow types, such as soft, textured, colored, and broken shadows. Using this data, the most thorough comparison of state-of-the-art shadow removal methods to date is performed, showing our proposed algorithm to outperform the state of the art across several measures and shadow categories. To complement our data set, an online shadow removal benchmark website is also presented to encourage future open comparisons in this challenging field of research.
The recently discovered color homography theorem proves that colors across a change in photometric viewing condition are related by a homography [2]. In this paper, we propose a color-homography-based color transfer decomposition which encodes color transfer as a combination of a chromaticity shift and a shading adjustment. Our experiments show that the proposed color transfer decomposition provides a very close approximation to many popular color transfer methods. We believe that our color transfer model is useful and fundamental for developing simple and efficient color transfer algorithms. Our model also enables users to amend the imperfections of a color transfer result or extract a concise form of the original desired effect (which allows a more efficient re-application of the original color transfer). In Figure 1, we start with the outputs of the prior-art algorithms. Assuming we relate a source image I_s to a target image I_t with a pixel-wise correspondence, we represent the RGBs of I_s and I_t as two n × 3 matrices A and B respectively, where n is the number of pixels. These n × 3 matrices can be reconstituted into the original image grids. The chromaticity mapping is modeled as a 3 × 3 linear transform, but because of the relative positions of light and surfaces there might also be per-pixel shading perturbations. Assuming Lambertian image formation is an accurate physical model,

B ≈ D A H, (1)

where D is an n × n diagonal matrix of shading factors and H is a 3 × 3 chromaticity mapping matrix. A color transfer can thus be decomposed into a diagonal shading matrix D and a homography matrix H. The homography matrix H is a global chromaticity mapping. The matrix D can be seen as a change of surface reflectance or position of illuminant. Equation 1 can be solved by Alternating Least Squares [2].
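The alternating least squares solution for the decomposition B ≈ D A H can be sketched as follows. This is a minimal numpy illustration of the generic ALS scheme, not necessarily the exact solver of [2]; the function name and iteration count are our own choices.

```python
# Minimal ALS sketch for B ≈ D A H: alternately fix the diagonal shading D
# and the 3x3 chromaticity homography H, solving least squares for the other.
import numpy as np

def decompose_color_transfer(A, B, n_iter=50):
    """A, B: n x 3 matrices of source/target RGBs with pixel-wise
    correspondence. Returns (D, H) with D an n x n diagonal matrix."""
    n = A.shape[0]
    d = np.ones(n)  # diagonal of D
    for _ in range(n_iter):
        # Fix D, solve (D A) H ≈ B for H in the least-squares sense.
        H, *_ = np.linalg.lstsq(d[:, None] * A, B, rcond=None)
        # Fix H, solve the per-pixel shading: d_i (A H)_i ≈ B_i.
        AH = A @ H
        d = np.einsum('ij,ij->i', AH, B) / np.maximum(
            np.einsum('ij,ij->i', AH, AH), 1e-12)
    return np.diag(d), H
```

Each half-step has a closed-form optimum (a 3 x 3 least-squares fit and an independent scalar fit per pixel), so every iteration is cheap and the residual decreases monotonically.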
To apply the extracted color transfer effect to a different scene, the shading adjustment D is further modeled as a smooth brightness-to-shading function f, i.e. the shading of each pixel is predicted from its brightness as D_ii = f(b_i); we denote this the Mapped Shading Homography. We show some visual results of color transfer approximations in Figure 2. Global 3-D affine mapping [3] does not reproduce the shading adjustments of color transfer well. In Figure 3, the original shading homography approximation retains the noise and over-saturation artifacts of the
When we place a colored filter in front of a camera, the effective camera response functions equal the given camera spectral sensitivities multiplied by the filter's spectral transmittance. In this paper, we solve for the filter that makes the modified sensitivities as close as possible to a linear transformation of the color matching functions of the human visual system. When this linearity condition (sometimes called the Luther condition) is approximately met, the 'camera+filter' system can be used for accurate color measurement. We then reformulate our filter-design optimisation to make the sensor responses as close as possible to the CIE XYZ tristimulus values, given real measured surface and illuminant spectra. This data-driven method is in turn extended to incorporate constraints on the filter (smoothness and bounded transmission). Also, because the initialisation of the optimisation is shown to impact the performance of the solved-for filters, a multi-initialisation optimisation is developed. Experiments demonstrate that, by taking pictures through our optimised color filters, we can make cameras significantly more colorimetric.
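The Luther-condition formulation above can be sketched as a small alternating least squares problem: find a per-wavelength transmittance f and a 3 x 3 transform M such that diag(f) Q ≈ X M, where Q holds the camera sensitivities and X the color matching functions. This is our own simplified illustration, not the paper's solver; in particular, the rescaling used to keep transmittance in [0, 1] is a crude stand-in for the paper's bounded-transmission constraint.

```python
# ALS sketch for the Luther-condition filter: alternately solve for the 3x3
# transform M and the per-wavelength filter transmittance f.
import numpy as np

def solve_luther_filter(Q, X, n_iter=50):
    """Q, X: (num_wavelengths x 3) camera sensitivities and color matching
    functions, sampled at the same wavelengths. Returns (f, M)."""
    f = np.ones(Q.shape[0])
    for _ in range(n_iter):
        # Fix f, solve X M ≈ diag(f) Q for the 3x3 transform M.
        M, *_ = np.linalg.lstsq(X, f[:, None] * Q, rcond=None)
        # Fix M, solve each wavelength's transmittance independently:
        # f_i minimizes ||f_i Q_i - (X M)_i||^2.
        T = X @ M
        f = np.einsum('ij,ij->i', Q, T) / np.maximum(
            np.einsum('ij,ij->i', Q, Q), 1e-12)
        f = np.maximum(f, 0.0)
        if f.max() > 1.0:
            f = f / f.max()  # exploit scale ambiguity: transmittance <= 1
    return f, M
```

The rescaling works because the fit is scale-ambiguous: (f, M) and (f/c, M/c) give the same relative residual, so f can always be normalised into the physical range.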
Ag nanowire (NW) arrays with NW diameters d_NW = 12–120 nm were electrodeposited in anodic aluminum oxide templates. Strong avalanche photoluminescence (PL) from the Ag NW arrays with small d_NW was observed near 914 nm using a picosecond laser at an excitation wavelength of 808 nm; it originates from plasmon-enhanced radiative intraband transitions. The peak intensity of the avalanche PL from the sample with small diameter (d_NW = 12 nm) is about 10^2 times stronger than that of the linear PL from the sample with large diameter (d_NW = 120 nm). Opposite excitation-polarization dependence and emission-polarization distributions of the PL from the Ag NW arrays with d_NW = 12 nm and d_NW = 120 nm were also observed.