Summary

Quantitative measurements of colour, pattern and morphology are vital to a growing range of disciplines. Digital cameras are readily available and already widely used for making these measurements, having numerous advantages over other techniques, such as spectrometry. However, off-the-shelf consumer cameras are designed to produce images for human viewing, meaning that their uncalibrated photographs cannot be used for making reliable, quantitative measurements. Many studies still fail to appreciate this, and of those scientists who are aware of such issues, many are hindered by a lack of usable tools for making objective measurements from photographs.

We have developed an image processing toolbox that generates images that are linear with respect to radiance from the RAW files of numerous camera brands, and can combine image channels from multispectral cameras, including additional ultraviolet photographs. Images are then normalised using one or more grey standards to control for lighting conditions. This enables objective measures of reflectance and colour using a wide range of consumer cameras. Furthermore, if the camera's spectral sensitivities are known, the software can convert images to correspond to the visual system (cone-catch values) of a wide range of animals, enabling human and non-human visual systems to be modelled. The toolbox also provides image analysis tools that can extract luminance (lightness), colour and pattern information. All processing is performed on 32-bit floating point images rather than the commonly used 8-bit images; this increases precision and reduces the likelihood of data loss through rounding error or pixel saturation, while also facilitating the measurement of objects with shiny or fluorescent properties.

All cameras tested with this software demonstrated a linear response within each image and across a range of exposure times.
Cone-catch mapping functions were highly robust, converting images to several animal visual systems and yielding data that agreed closely with spectrometer-based estimates.

Our imaging toolbox is freely available as an addition to the open source ImageJ software. We believe that it will considerably enhance the appropriate use of digital cameras across multiple areas of biology, in particular among researchers aiming to quantify animal and plant visual signals.
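The grey-standard normalisation described above can be sketched in a few lines. This is a minimal illustration of the general principle, assuming an already-linearised image and a grey standard of known reflectance; it is not the toolbox's actual implementation, and the function and parameter names are my own:

```python
import numpy as np

def normalise(linear_image, standard_region, standard_reflectance=0.5):
    """Scale a linear camera image so pixel values are proportional to
    reflectance, using a grey standard of known reflectance.

    linear_image: (H, W, C) array, linear with respect to radiance.
    standard_region: (h, w, C) crop of the grey standard in the same image.
    standard_reflectance: the standard's known reflectance (0.5 = 50%).
    """
    # Mean pixel value of the standard in each channel
    channels = standard_region.shape[-1]
    standard_means = standard_region.reshape(-1, channels).mean(axis=0)
    # A pixel equal to the standard's value maps to the standard's reflectance,
    # cancelling out the intensity and colour of the illuminant per channel
    return linear_image * (standard_reflectance / standard_means)
```

Because the scaling is per channel, the same correction also compensates for the spectral colour of the illuminant, not just its overall intensity, provided the standard is spectrally flat.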
1. To understand the function of colour signals in nature, we require robust quantitative analytical frameworks that enable us to estimate how animal and plant colour patterns appear against their natural backgrounds as viewed by ecologically relevant species. Due to the quantitative limitations of existing methods, colour and pattern are rarely analysed in conjunction with one another, despite a large body of literature and decades of research on the importance of spatio-chromatic colour pattern analyses. Furthermore, key physiological limitations of animal visual systems, such as spatial acuity, spectral sensitivities, photoreceptor abundances and receptor noise levels, are rarely considered together in colour pattern analyses.

2. Here, we present a novel analytical framework, called Quantitative Colour Pattern Analysis (QCPA). We have overcome many quantitative and qualitative limitations of existing colour pattern analyses by combining calibrated digital photography and visual modelling. We have integrated and updated existing spatio-chromatic colour pattern analyses, including adjacency, visual contrast and boundary strength analysis, to be implemented using calibrated digital photography through the Multispectral Image Calibration and Analysis (MICA) Toolbox.

3. This combination of calibrated photography and spatio-chromatic colour pattern analyses is enabled by the inclusion of psychophysical colour and luminance discrimination thresholds for image segmentation, which we call 'Receptor Noise Limited Clustering', used here for the first time. Furthermore, QCPA provides a novel psycho-physiological approach to the modelling of spatial acuity using convolution in the spatial or frequency domain, followed by 'Receptor Noise Limited Ranked Filtering' to eliminate intermediate edge artefacts and recover sharp boundaries following smoothing.
We also present a new type of colour pattern analysis, the 'Local Edge Intensity Analysis', as well as a range of novel psycho-physiological approaches to the visualization of spatio-chromatic data. QCPA combines novel and existing pattern analysis frameworks into what we hope is a unified, free and open source toolbox, and introduces a range of novel analytical approaches.
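The 'Receptor Noise Limited' thresholds underpinning QCPA's clustering step follow the Vorobyev–Osorio receptor-noise model. A minimal sketch of the chromatic distance ΔS for a trichromat is shown below; this is my own illustration of the published model, not QCPA's code, and the noise values `e` are assumed inputs (commonly derived from a Weber fraction and relative photoreceptor abundances):

```python
import numpy as np

def rnl_delta_s(qa, qb, e):
    """Receptor-noise-limited chromatic distance (in just-noticeable
    differences) between two stimuli, for a trichromatic viewer.

    qa, qb: cone-catch triplets for the two stimuli.
    e:      per-channel noise, e.g. e_i = weber_fraction / sqrt(abundance_i).
    """
    # Fechner log contrasts between the two stimuli in each channel
    df = np.log(np.asarray(qa, dtype=float) / np.asarray(qb, dtype=float))
    e1, e2, e3 = e
    # Trichromat form of the Vorobyev-Osorio (1998) distance
    num = (e1 * (df[2] - df[1]))**2 \
        + (e2 * (df[2] - df[0]))**2 \
        + (e3 * (df[0] - df[1]))**2
    den = (e1 * e2)**2 + (e1 * e3)**2 + (e2 * e3)**2
    return np.sqrt(num / den)
```

A ΔS of 1 corresponds to one just-noticeable difference; pixels (or cluster means) closer than a chosen threshold can be merged, which is the intuition behind clustering on these distances.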
Evading detection by predators is crucial for survival. Camouflage is therefore a widespread adaptation, but despite substantial research effort our understanding of different camouflage strategies has relied predominantly on artificial systems and on experiments disregarding how camouflage is perceived by predators. Here we show, for the first time in a natural system, that the survival probability of wild animals is directly related to their level of camouflage as perceived by the visual systems of their main predators. Ground-nesting plovers and coursers flee as threats approach, and their clutches were more likely to survive when their egg contrast matched their surrounds. In nightjars – which remain motionless as threats approach – clutch survival depended on plumage pattern matching between the incubating bird and its surrounds. Our findings highlight the importance of pattern- and luminance-based camouflage properties, and the effectiveness of modern techniques in capturing the adaptive properties of visual phenotypes.
Background

Quantifying the conspicuousness of objects against particular backgrounds is key to understanding the evolution and adaptive value of animal coloration, and to designing effective camouflage. Quantifying detectability can reveal how colour patterns affect survival, how animals' appearances influence habitat preferences, and how receiver visual systems work. Advances in calibrated digital imaging are enabling the capture of objective visual information, but it remains unclear which methods are best for measuring detectability. Numerous descriptions and models of appearance have been used to infer the detectability of animals, but these models are rarely empirically validated or directly compared to one another. We compared the performance of human 'predators' to a bank of contemporary methods for quantifying the appearance of camouflaged prey. Background matching was assessed using several established methods, including sophisticated feature-based pattern analysis, granularity approaches and a range of luminance and contrast difference measures. Disruptive coloration is a further camouflage strategy in which high-contrast patterns disrupt the prey's tell-tale outline, making it more difficult to detect. Disruptive camouflage has been studied intensely over the past decade, yet defining and measuring it have proven far more problematic. We assessed how well existing disruptive coloration measures predicted capture times.
Additionally, we developed a new method for measuring edge disruption based on an understanding of sensory processing and the way in which false edges are thought to interfere with animal outlines.

Results

Our novel measure of disruptive coloration was the best predictor of capture times overall, highlighting the importance of false edges in concealment over and above pattern or luminance matching.

Conclusions

The efficacy of our new method for measuring disruptive camouflage, together with its biological plausibility and computational efficiency, represents a substantial advance in our understanding of the measurement, mechanism and definition of disruptive camouflage. Our study also provides the first test of the efficacy of many established methods for quantifying how conspicuous animals are against particular backgrounds. The validation of these methods opens up new lines of investigation surrounding the form and function of different types of camouflage, and may apply more broadly to the evolution of any visual signal.

Electronic supplementary material

The online version of this article (doi:10.1186/s12862-016-0854-2) contains supplementary material, which is available to authorized users.
Illumination varies greatly both across parts of a natural scene and as a function of time, whereas the spectral reflectance function of surfaces remains more stable and is of much greater relevance when searching for specific targets. This study investigates the functional properties of postreceptoral opponent-channel responses, in particular their stability against spatial and temporal variation in illumination. We studied images of natural scenes obtained in the UK and Uganda with digital cameras calibrated to produce estimated L-, M-, and S-cone responses of trichromatic primates (human) and birds (starling). For both primates and birds we calculated luminance and red-green opponent (RG) responses. We also calculated a primate blue-yellow opponent (BY) response. The BY response varies with changes in illumination, both across time and across the image, making this channel less invariant. The RG response is much more stable than the BY response across such changes in illumination for primates, less so for birds. These differences between species are due to the greater separation of bird L and M cones in wavelength and the narrower bandwidth of the cone action spectra. This greater separation also produces a larger chromatic signal for a given change in spectral reflectance. Thus bird vision seems to suffer a greater degree of spatiotemporal "clutter" than primate vision, but the same properties also enhance differences between targets and background. There may therefore be a trade-off between the degree of chromatic clutter in a visual system and the degree of chromatic difference between a target and its background. Primate and bird visual systems have found different solutions to this trade-off.
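The opponent responses compared above can be written down simply from cone catches. The sketch below uses common textbook normalisations; published studies (including this one) differ in the exact weightings, so treat the specific formulas as illustrative assumptions rather than the paper's definitions:

```python
def opponent_channels(L, M, S):
    """Illustrative cone-opponent responses from L-, M- and S-cone catches.

    Normalisations here are one common convention, not the paper's exact one:
    contrasts are ratios of differences to sums, so they are invariant to a
    uniform scaling of all inputs (e.g. a brighter illuminant).
    """
    luminance = L + M                            # achromatic channel
    rg = (L - M) / (L + M)                       # red-green opponent contrast
    by = (S - (L + M) / 2) / (S + (L + M) / 2)   # blue-yellow opponent contrast
    return luminance, rg, by
```

The ratio form makes the chromatic channels insensitive to overall illumination intensity, which is why the abstract's comparison focuses on how each channel copes with *spectral* (not just intensity) changes in the illuminant.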
Cuckoo eggs famously mimic those of their foster parents to evade rejection by discriminating hosts. Here we test whether parasites benefit by repeatedly parasitizing the same host nest. This should make accurate rejection decisions harder, regardless of the mechanism that hosts use to identify foreign eggs. We find strong support for this prediction in the African tawny-flanked prinia (Prinia subflava), the most common host of the cuckoo finch (Anomalospiza imberbis). We show experimentally that hosts reject eggs that differ from an internal template, but crucially, as the proportion of foreign eggs increases, hosts are less likely to reject them and require greater differences in appearance to do so. Repeated parasitism by the same cuckoo finch female is common in host nests and likely to be an adaptation to increase the probability of host acceptance. Thus, repeated parasitism interacts with egg mimicry to exploit cognitive and sensory limitations in host defences.
Tool use is so rare in the animal kingdom that its evolutionary origins cannot be traced with comparative analyses. Valuable insights can be gained from investigating the ecological context and adaptive significance of tool use under contemporary conditions, but obtaining robust observational data is challenging. We assayed individual-level tool-use dependence in wild New Caledonian crows by analyzing stable isotope profiles of the birds' feathers, blood, and putative food sources. Bayesian diet-mixing models revealed that a substantial amount of the crows' protein and lipid intake comes from prey obtained with stick tools: wood-boring beetle larvae. Our calculations provide estimates of larva-intake rates and show that just a few larvae can satisfy a crow's daily energy requirements, highlighting the substantial rewards available to competent tool users.