Abstract. This paper is concerned with the derivation of a progression of shadow-free image representations. First we show that adopting certain assumptions about lights and cameras leads to a 1-d, grey-scale image representation which is illuminant invariant at each image pixel. We show that as a consequence, images represented in this form are shadow-free. We then extend this 1-d representation to an equivalent 2-d, chromaticity representation. We show that in this 2-d representation, it is possible to re-light all the image pixels in the same way, effectively deriving a 2-d image representation which is additionally shadow-free. Finally, we show how to recover a 3-d, full colour shadow-free image representation by first (with the help of the 2-d representation) identifying shadow edges. We then remove shadow edges from the edge-map of the original image by edge in-painting, and we propose a method to re-integrate the resulting edge map, thus deriving the sought-after 3-d shadow-free image.
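The 1-d invariant representation described above can be illustrated with a short sketch. This is a minimal illustration, not the paper's implementation: it assumes the Lambertian/Planckian/narrowband model holds and that the invariant angle `theta` is already known (e.g. from a camera calibration); the function name and constants are illustrative.

```python
import numpy as np

def invariant_greyscale(rgb, theta):
    """Form a 1D illuminant-invariant greyscale image by projecting
    2D log-chromaticities onto the direction (cos theta, sin theta),
    orthogonal to the direction along which lighting changes move
    pixel chromaticities.  `theta` is assumed known."""
    r = rgb[..., 0] + 1e-6      # small offset avoids log(0)
    g = rgb[..., 1] + 1e-6
    b = rgb[..., 2] + 1e-6
    chi1 = np.log(r / g)        # band-ratio log-chromaticities
    chi2 = np.log(b / g)
    # projection orthogonal to the lighting direction removes the
    # lighting-dependent term, leaving a reflectance-only quantity
    return chi1 * np.cos(theta) + chi2 * np.sin(theta)
```

Under the stated model, a lighting change shifts a pixel's log-chromaticity along a fixed direction, so the projection above returns the same greyscale value for one surface under different illuminants.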
Abstract. A method was recently devised for the recovery of an invariant image from a 3-band colour image. The invariant image, originally 1D greyscale but here derived as a 2D chromaticity, is independent of lighting, and also has shading removed: it forms an intrinsic image that may be used as a guide in recovering colour images that are independent of illumination conditions. Invariance to illuminant colour and intensity means that such images are also, to a good degree, free of shadows. The method devised finds an intrinsic reflectivity image based on assumptions of Lambertian reflectance, approximately Planckian lighting, and fairly narrowband camera sensors. Nevertheless, the method works well when these assumptions do not hold. A crucial piece of information is the angle of an "invariant direction" in a log-chromaticity space. To date, we have gleaned this information via a preliminary calibration routine, using the camera involved to capture images of a colour target under different lights. In this paper, we show that we can in fact dispense with the calibration step, by recognizing a simple but important fact: the correct projection is that which minimizes entropy in the resulting invariant image. To show that this must be the case we first consider synthetic images, and then apply the method to real images. We show that not only does a correct shadow-free image emerge, but also that the angle found agrees with that recovered from a calibration. As a result, we can find shadow-free images for images with unknown camera, and the method is applied successfully to remove shadows from unsourced imagery.
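The entropy-minimization idea above can be sketched directly: project the 2D log-chromaticities onto each candidate direction and keep the angle whose 1D projection has the smallest histogram entropy. This is a minimal sketch assuming Shannon entropy over a fixed-bin histogram; the function name, grid resolution, and bin count are illustrative, not taken from the paper.

```python
import numpy as np

def invariant_direction(rgb, n_angles=180, bins=64):
    """Grid-search the projection angle that minimizes the Shannon
    entropy of the 1D projected log-chromaticity values."""
    rgb = rgb.reshape(-1, 3).astype(float) + 1e-6   # avoid log(0)
    # 2D log-chromaticity coordinates: log(R/G), log(B/G)
    chi = np.stack([np.log(rgb[:, 0] / rgb[:, 1]),
                    np.log(rgb[:, 2] / rgb[:, 1])], axis=1)
    best_theta, best_h = None, np.inf
    for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
        proj = chi @ np.array([np.cos(theta), np.sin(theta)])
        counts, _ = np.histogram(proj, bins=bins)
        p = counts[counts > 0] / counts.sum()
        h = -(p * np.log(p)).sum()          # Shannon entropy
        if h < best_h:
            best_theta, best_h = theta, h
    return best_theta, best_h
```

The intuition matches the text: pixels of one surface under varying lighting lie along parallel lines in log-chromaticity space, so projecting orthogonal to those lines collapses each surface to a spike and yields low entropy, while any other angle smears the lighting variation across the histogram.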
Abstract. Recently, a method for removing shadows from colour images was developed [Finlayson, Hordley, Lu, and Drew, PAMI 2006] that relies upon finding a special direction in a 2D chromaticity feature space. This "invariant direction" is that for which particular colour features, when projected into 1D, produce a greyscale image which is approximately invariant to the intensity and colour of the scene illumination. Thus shadows, which are in essence a particular type of lighting, are greatly attenuated. The main approach to finding this special angle is a camera calibration: a colour target is imaged under many different lights, and the direction that best makes the colour patch images equal across illuminants is the invariant direction. Here, we take a different approach: instead of a camera calibration, we aim to find the invariant direction from evidence in the colour image itself. Specifically, we recognize that producing a 1D projection in the correct invariant direction will result in a 1D distribution of pixel values with smaller entropy than projecting in the wrong direction. The reason is that the correct projection produces a spike in the probability distribution, since pixels that are identical except for the lighting that produced their observed RGB values lie along a line with orientation equal to the invariant direction. Hence we seek the projection which, by minimizing entropy, produces a type of intrinsic image that is independent of lighting and carries reflectance information only, and from there go on to remove shadows as previously. To develop an effective formulation of the entropy-minimization task, we adopt the quadratic entropy rather than Shannon's definition. Replacing the observed pixels with a kernel density probability distribution, the quadratic entropy can be written in a very simple form and evaluated using the efficient Fast Gauss Transform.
The entropy, written in this form, has the advantage that it is less sensitive to quantization than the usual definition. The resulting algorithm is quite reliable, and the shadow removal step produces good shadow-free colour image results whenever strong shadow edges are present in the image. In most cases studied, entropy has a strong minimum for the invariant direction, revealing a new property of image formation.
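The quadratic (Rényi order-2) entropy of a kernel density estimate has the closed form the abstract alludes to: for a Gaussian KDE, the integral of the squared density reduces to a pairwise sum of Gaussians with doubled variance. The sketch below evaluates that sum directly in O(N²); the paper uses the Fast Gauss Transform for the same quantity. The function name and bandwidth are illustrative assumptions.

```python
import numpy as np

def quadratic_entropy(x, sigma=0.1):
    """Renyi quadratic entropy -log(integral p^2) of a 1D sample,
    where p is a Gaussian KDE with bandwidth sigma.  Using
    G(x; v) = exp(-x^2 / 2v) / sqrt(2 pi v), the product of two
    kernels integrates to a single Gaussian of doubled variance:
        integral p^2 = (1/N^2) sum_ij G(x_i - x_j; 2 sigma^2),
    so no histogram (and hence no quantization) is needed."""
    x = np.asarray(x, float).ravel()
    n = len(x)
    d2 = (x[:, None] - x[None, :]) ** 2     # all pairwise squared gaps
    s2 = 2.0 * sigma ** 2                   # doubled variance
    ip = np.exp(-d2 / (2 * s2)).sum() / (n * n * np.sqrt(2 * np.pi * s2))
    return -np.log(ip)
```

Because the estimate is a smooth function of the sample values rather than of bin counts, it varies continuously with the projection angle, which is what makes it better behaved under quantization than the histogram-based Shannon definition.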
A set of features related to the density and spatial architecture of tumor-infiltrating lymphocytes (TILs) was found to be associated with the likelihood of recurrence of early-stage non-small cell lung cancer (NSCLC). This information could potentially help in treatment planning and management of early-stage NSCLC.
Early-stage estrogen receptor-positive (ER+) breast cancer (BCa) is the most common type of BCa in the United States. One critical question with these tumors is identifying which patients will receive added benefit from adjuvant chemotherapy. Nuclear pleomorphism (variance in nuclear shape and morphology) is an important constituent of breast grading schemes, and in ER+ cases the grade is highly correlated with disease outcome. This study aimed to investigate whether quantitative computer-extracted image features of nuclear shape and orientation on digitized images of hematoxylin- and eosin-stained tissue of lymph node-negative (LN−), ER+ BCa could help stratify patients into discrete (<10 years short-term vs. >10 years long-term survival) outcome groups independently of standard clinical and pathological parameters. We considered a tissue microarray (TMA) cohort of 276 ER+, LN− patients comprising 150 patients with long-term and 126 patients with short-term overall survival, wherein 177 randomly chosen cases formed the modeling set and the remaining 99 cases the test set. Segmentation of individual nuclei was performed using multiresolution watershed; subsequently, 615 features relating to nuclear shape/texture and orientation disorder were extracted from each TMA spot. The Wilcoxon rank-sum test identified the 15 most prognostic quantitative histomorphometric features within the modeling set. These features were then combined via a linear discriminant analysis classifier and evaluated on the test set to assign a probability of long-term vs. short-term disease-specific survival. In univariate survival analysis, patients identified by the image classifier as high risk had significantly poorer survival outcomes: hazard ratio = 2.91 (95% confidence interval: 1.23–6.92), p = 0.02786.
Multivariate analysis controlling for T-stage, histologic grade, and nuclear grade showed the classifier to be independently predictive of poorer survival: hazard ratio = 3.17 (95% confidence interval: 0.33–30.46), p = 0.01039. Our results suggest that quantitative histomorphometric features of nuclear shape and orientation are strongly and independently predictive of patient survival in ER+, LN− BCa.
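The modeling pipeline described above (rank-sum feature ranking followed by a linear discriminant classifier) can be sketched in a few lines. This is a simplified illustration on synthetic data, not the study's code: function names are invented, ties in ranks are ignored, and a two-class Fisher discriminant stands in for the full LDA with probability outputs.

```python
import numpy as np

def ranksum_stat(a, b):
    """Standardized Wilcoxon rank-sum statistic for two samples
    (continuous features assumed, so ties are ignored)."""
    ranks = np.argsort(np.argsort(np.concatenate([a, b]))) + 1.0
    n1, n2 = len(a), len(b)
    w = ranks[:n1].sum()                      # rank sum of first group
    mu = n1 * (n1 + n2 + 1) / 2.0             # null mean
    sd = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (w - mu) / sd

def select_and_fit(X, y, k=15):
    """Keep the k features with largest |rank-sum statistic| between
    outcome groups, then fit a two-class Fisher discriminant."""
    stats = np.array([abs(ranksum_stat(X[y == 0, j], X[y == 1, j]))
                      for j in range(X.shape[1])])
    keep = np.argsort(stats)[::-1][:k]
    Xk = X[:, keep]
    mu0, mu1 = Xk[y == 0].mean(0), Xk[y == 1].mean(0)
    cov = np.cov(Xk[y == 0], rowvar=False) + np.cov(Xk[y == 1], rowvar=False)
    w = np.linalg.solve(cov + 1e-6 * np.eye(k), mu1 - mu0)
    thresh = w @ (mu0 + mu1) / 2.0            # midpoint decision boundary
    return keep, w, thresh
```

Ranking features on the modeling set before fitting, as the study does, keeps the selection step inside the training data so the held-out test set gives an honest estimate of performance.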