Colour constancy in human visual perception keeps surface colours constant despite changes in their reflected light caused by changing illumination. Although colour constancy has evolved under a constrained subset of illuminations, it is unknown whether its underlying mechanisms, thought to involve multiple components from retina to cortex, are optimised for particular environmental variations. Here we demonstrate a new method for investigating colour constancy using illumination matching in real scenes which, unlike previous methods based on surface matching and simulated scenes, allows testing of multiple real illuminations. We use real scenes consisting of solid familiar or unfamiliar objects against uniform or variegated backgrounds, and compare discrimination performance for typical illuminations from the daylight chromaticity locus (approximately blue-yellow) and atypical spectra from an orthogonal locus (approximately red-green, at correlated colour temperature 6700 K), all produced in real time by a 10-channel LED illuminator. We find that discrimination of illumination changes is poorer along the daylight locus than along the atypical locus, and poorest for bluer illumination changes, demonstrating conversely that surface colour constancy is best for blue daylight illuminations. Illumination discrimination is also enhanced, and colour constancy therefore diminished, for uniform backgrounds, irrespective of object type. These results are not explained by statistical properties of the scene signal changes at the retinal level. We conclude that high-level mechanisms of colour constancy are biased towards the blue daylight illuminations and variegated backgrounds to which the human visual system has typically been exposed.
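To make the daylight chromaticity locus concrete, here is a minimal Python sketch using the standard CIE daylight-illuminant formulas; it is illustrative only and is not the stimulus-generation code used in the study.

```python
# Sketch: CIE 1931 chromaticity of a daylight illuminant at a given
# correlated colour temperature (CCT), from the standard CIE daylight
# formulas. Illustrates the "daylight chromaticity locus" above; this is
# not the study's own stimulus-generation code.

def daylight_chromaticity(cct_kelvin):
    """Return (x, y) on the CIE daylight locus for 4000 K <= CCT <= 25000 K."""
    T = float(cct_kelvin)
    if 4000 <= T <= 7000:
        x = -4.6070e9 / T**3 + 2.9678e6 / T**2 + 0.09911e3 / T + 0.244063
    elif 7000 < T <= 25000:
        x = -2.0064e9 / T**3 + 1.9018e6 / T**2 + 0.24748e3 / T + 0.237040
    else:
        raise ValueError("CIE daylight formulas cover 4000-25000 K only")
    # The locus itself: y is a fixed quadratic function of x.
    y = -3.000 * x**2 + 2.870 * x - 0.275
    return x, y

print(daylight_chromaticity(6700))  # approx. (0.310, 0.326), near D65
```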
Cameras record three color responses (RGB) which are device-dependent. Camera coordinates are mapped to a standard color space, such as XYZ (useful for color measurement), by a mapping function, e.g., a simple 3×3 linear transform, usually derived through regression. This mapping, which we refer to as linear color correction (LCC), has been demonstrated to work well in a number of studies. However, it can still map RGBs to XYZs with high error. The advantage of LCC is that it is independent of camera exposure. An alternative and potentially more powerful method for color correction is polynomial color correction (PCC). Here, the R, G, and B values at a pixel are extended by polynomial terms. For a given calibration training set, PCC can significantly reduce the colorimetric error. However, the PCC fit depends on exposure: as exposure changes, the vector of polynomial components is altered in a nonlinear way, which results in hue and saturation shifts. This paper proposes a new polynomial-type regression, loosely related to the idea of fractional polynomials, which we call root-PCC (RPCC). Our idea is to take each term in a polynomial expansion and replace each k-degree term with its kth root. It is easy to show that terms defined in this way scale with exposure. RPCC is a simple (low-complexity) extension of LCC. The experiments presented in this paper demonstrate that RPCC enhances color correction performance on real and synthetic data.
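The exposure-invariance argument is easy to verify numerically. Below is a minimal numpy sketch of a degree-2 root-polynomial expansion and its least-squares fit; the function names and the choice of degree are illustrative, not the paper's exact implementation.

```python
import numpy as np

def rpcc_expand(rgb):
    """Degree-2 root-polynomial expansion of an Nx3 array of RGBs.
    Each k-degree polynomial term is replaced by its kth root, so every
    expanded term scales linearly with exposure."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([r, g, b,
                     np.sqrt(r * g), np.sqrt(r * b), np.sqrt(g * b)], axis=1)

def fit_rpcc(rgb_train, xyz_train):
    """Least-squares fit of the correction matrix M (expansion -> XYZ)
    from a calibration set of camera RGBs and measured XYZs."""
    Phi = rpcc_expand(rgb_train)
    M, *_ = np.linalg.lstsq(Phi, xyz_train, rcond=None)
    return M

# Exposure invariance: scaling the input RGBs by k scales every expanded
# term by k, so corrected XYZs scale by k (no hue/saturation shift).
rgb = np.random.rand(10, 3)
k = 2.0
assert np.allclose(rpcc_expand(k * rgb), k * rpcc_expand(rgb))
```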
This paper describes the use of color image analysis to automatically discriminate between oesophagus, stomach, small intestine, and colon tissue in wireless capsule endoscopy (WCE). WCE uses "pill-cam" technology to recover color video imagery from the entire gastrointestinal tract. Accurately reviewing and reporting these data is a vital part of the examination, but it is tedious and time-consuming. Automatic image analysis tools play an important role in supporting the clinician and speeding up this process. Our approach first divides the WCE image into subimages and rejects any subimage in which tissue is not clearly visible. We then create a feature vector combining color, texture, and motion information from the entire image and the valid subimages. Color features are derived from hue-saturation histograms, compressed using a hybrid transform incorporating the discrete cosine transform and principal component analysis. A second feature combining color and texture information is derived using local binary patterns. The video is segmented into meaningful parts using support vector or multivariate Gaussian classifiers built within the framework of a hidden Markov model. We present experimental results that demonstrate the effectiveness of this method.
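As a sketch of the colour feature described above, the following illustrates a hue-saturation histogram compressed by keeping low-frequency DCT coefficients. The bin counts and number of retained coefficients are arbitrary choices, and the subsequent PCA stage is omitted, so this is an assumption-laden illustration rather than the paper's exact pipeline.

```python
import numpy as np
from scipy.fft import dctn

def hs_histogram_feature(hue, sat, bins=32, keep=8):
    """Colour-feature sketch: 2-D hue-saturation histogram compressed by
    keeping the low-frequency corner of its 2-D DCT. (The paper further
    compresses the DCT output with PCA over the training set; `bins` and
    `keep` here are illustrative.)

    hue, sat: per-pixel hue in [0, 1) and saturation in [0, 1].
    """
    hist, _, _ = np.histogram2d(hue.ravel(), sat.ravel(),
                                bins=bins, range=[[0, 1], [0, 1]])
    hist /= max(hist.sum(), 1.0)          # normalise to a distribution
    coeffs = dctn(hist, norm='ortho')     # 2-D discrete cosine transform
    return coeffs[:keep, :keep].ravel()   # low-frequency coefficients only
```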
In this article, we describe a spectral sensitivity measurement procedure carried out at the National Physical Laboratory, London, with the aim of obtaining ground-truth spectral sensitivity functions for the Nikon D5100 and Sigma SD1 Merrill cameras. The novelty of our data is that the potential measurement errors are estimated at each wavelength, so we can determine how well the measured spectral sensitivity functions represent the actual camera sensitivities as a function of wavelength. The second contribution of this paper is to test the performance of several leading sensor estimation techniques implemented from the literature, using measured and synthetic data, and to evaluate them against the ground-truth data for the two cameras. We conclude that the estimation techniques tested are not sufficiently accurate when compared with our measured ground truth, and that there remains significant scope to improve spectral sensitivity estimation algorithms. To help in this endeavor, we will make all our data available online for the community.
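For context, one common family of sensor estimation techniques recovers a channel's sensitivity by regularised least squares from training stimuli and responses. The sketch below assumes this formulation; the smoothness prior and regularisation weight are our illustrative choices, not necessarily those of the methods tested in the paper.

```python
import numpy as np

def estimate_sensitivity(L, rho, lam=1e-3):
    """Sketch of a standard sensor-estimation technique: recover a camera
    channel's spectral sensitivity s (one value per wavelength sample)
    by regularised least squares. L is an N x W matrix whose rows are the
    colour signals (illuminant x reflectance) reaching the sensor, and
    rho holds the N measured channel responses, so rho ~= L @ s.
    A second-difference operator D penalises non-smooth solutions;
    the weight lam is illustrative."""
    N, W = L.shape
    D = np.diff(np.eye(W), n=2, axis=0)   # (W-2) x W second differences
    A = L.T @ L + lam * (D.T @ D)
    return np.linalg.solve(A, L.T @ rho)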
We present a computer vision tool that analyses video from a CCTV system installed on fishing trawlers to monitor discarded fish catch. The system aims to support expert observers who review the footage and verify the numbers, species, and sizes of discarded fish. The operational environment presents a significant challenge for these tasks: fish are processed below deck under fluorescent lights, they are randomly oriented, and there are multiple occlusions. The scene is unstructured and further complicated by the presence of fishermen processing the catch. We describe an approach to segmenting the scene and counting fish that exploits the N4-Fields algorithm. We performed extensive tests of the algorithm on a data set comprising 443 frames from 6 belts. Results indicate that the relative count error (for individual fish) ranges from 2% to 16%. We believe this is the first system able to handle footage from operational trawlers.
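A note on the reported metric: relative count error compares the automatic count against the observer-verified count, as in the minimal sketch below (the per-belt aggregation is our assumption).

```python
def relative_count_error(predicted, actual):
    """Relative count error for one belt: |predicted - actual| / actual.
    The paper reports this per belt (2%-16%); summing per-frame counts
    before comparing is our assumption about the aggregation."""
    return abs(predicted - actual) / actual

# e.g. 103 fish counted automatically vs. 100 verified by an observer:
print(f"{relative_count_error(103, 100):.0%}")  # 3%
```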