2018
DOI: 10.1093/beheco/ary017
Comparing colors using visual models

Cited by 69 publications (79 citation statements)
References 61 publications
“…Furthermore, there are inherent assumptions with these visual models: (1) chromatic and achromatic visual channels operate independently of one another; (2) that color is coded by n − 1 opponent channels; and (3) that the limits to color discrimination are set by noise arising in receptors (Vorobyev & Osorio, 1998; Kelber et al., 2003; Olsson et al., 2017; Maia & White, 2018). Lastly, our own implementation of these visual models recognizes that we are modeling the perception of light sources against a homogeneously dark background and that eyes and other mechanisms will be adapted for the light intensity of the light sources.…”
Section: Methods
confidence: 99%
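The receptor-noise-limited model named in assumption (3) can be illustrated for the simplest (dichromatic) case. The following is a minimal Python sketch of the Vorobyev–Osorio chromatic distance, assuming log-transformed receptor signals; the function name and argument layout are hypothetical and this is not the authors' implementation:

```python
import math

def jnd_dichromat(qa, qb, noise):
    """Chromatic distance (in JNDs) between stimuli A and B for a dichromat
    under a receptor-noise-limited model (after Vorobyev & Osorio).
    qa, qb: (q1, q2) receptor quantum catches for each stimulus;
    noise: (e1, e2) noise values for the two receptor classes."""
    # Receptor signals as differences of log quantum catches (Fechner's law)
    df = [math.log(a) - math.log(b) for a, b in zip(qa, qb)]
    # A dichromat has a single opponent channel (n - 1 = 1); discrimination
    # is limited by the combined noise of the two receptors.
    e1, e2 = noise
    return abs(df[0] - df[1]) / math.sqrt(e1 ** 2 + e2 ** 2)
```

Identical stimuli yield a distance of 0 JNDs, and doubling one receptor's catch while noise is low pushes the pair well past the conventional 1-JND discrimination threshold.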
“…longer wavelengths for HPS). …discrimination are set by noise arising in receptors (Kelber et al., 2003; Olsson et al., 2017; Maia & White, 2018). Lastly, our own implementation of these visual models recognizes that we are modeling the perception of light sources against a homogeneously dark background and that eyes and other mechanisms will be adapted for the light intensity of the light sources.…”
Section: Visual System Stimulation Of Artificial Light Sources: Just …
confidence: 99%
“…To determine whether hypothetical models and mimics are statistically different in plumage coloration, we used permutation-based analyses of variance (PERMANOVAs) using perceptual color distances in the R package "vegan" [47]. We used 999 permutations and recorded the pseudo-F, the significance of the analysis (α = 0.05), and the R² [40]. To evaluate whether plumage patches showing statistical differences in reflectance are also perceptually distinguishable we did a bootstrap analysis to calculate a mean distance and a confidence interval in JNDs [40].…”
Section: Introduction
confidence: 99%
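The PERMANOVA step this snippet describes can be sketched as a one-factor permutation test on a distance matrix, following Anderson's pseudo-F construction. This Python sketch is illustrative only; the `permanova` function and its interface are assumptions, not the vegan routine the cited authors used:

```python
import numpy as np

def permanova(dist, groups, n_perm=999, seed=1):
    """One-factor PERMANOVA on a square distance matrix `dist`.
    Returns (pseudo_F, p_value) from `n_perm` label permutations."""
    dist = np.asarray(dist, float)
    groups = np.asarray(groups)
    n = len(groups)
    a = len(np.unique(groups))

    def pseudo_f(g):
        # Total sum of squares from all pairwise distances
        iu = np.triu_indices(n, 1)
        ss_total = (dist[iu] ** 2).sum() / n
        # Within-group sum of squares, one term per group
        ss_within = 0.0
        for lev in np.unique(g):
            idx = np.where(g == lev)[0]
            sub = dist[np.ix_(idx, idx)]
            iu2 = np.triu_indices(len(idx), 1)
            ss_within += (sub[iu2] ** 2).sum() / len(idx)
        ss_among = ss_total - ss_within
        return (ss_among / (a - 1)) / (ss_within / (n - a))

    f_obs = pseudo_f(groups)
    # Permute group labels and count statistics at least as extreme
    rng = np.random.default_rng(seed)
    count = sum(pseudo_f(rng.permutation(groups)) >= f_obs
                for _ in range(n_perm))
    return float(f_obs), (count + 1) / (n_perm + 1)
```

With 999 permutations, as in the snippet, the smallest attainable p-value is 1/1000; the permutation keeps group sizes fixed and only shuffles which observations carry which label.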
“…We used 999 permutations and recorded the pseudo-F, the significance of the analysis (α = 0.05), and the R² [40]. To evaluate whether plumage patches showing statistical differences in reflectance are also perceptually distinguishable we did a bootstrap analysis to calculate a mean distance and a confidence interval in JNDs [40]. If two colors are statistically distinct and the lower bound of the bootstrapped confidence interval is higher than the established JND threshold value, then one can conclude that these colors are statistically distinct and perceptually different given a visual model [40].…”
Section: Introduction
confidence: 99%
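The bootstrap decision rule in this snippet can be sketched as a percentile bootstrap on a sample of pairwise distances. Note this is a deliberate simplification: implementations used in color analysis typically resample the underlying spectra within groups rather than precomputed distances, and `bootstrap_mean_jnd` plus the 1-JND threshold check below are illustrative assumptions:

```python
import numpy as np

def bootstrap_mean_jnd(jnds, n_boot=5000, level=0.95, seed=42):
    """Percentile bootstrap of the mean of a sample of chromatic
    distances (in JNDs). Returns (observed_mean, lower, upper)."""
    jnds = np.asarray(jnds, float)
    rng = np.random.default_rng(seed)
    # Resample the distances with replacement and record each mean
    boot_means = np.array([
        rng.choice(jnds, size=jnds.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    alpha = (1.0 - level) / 2.0
    return (float(jnds.mean()),
            float(np.quantile(boot_means, alpha)),
            float(np.quantile(boot_means, 1.0 - alpha)))

# Per the snippet's rule: patches count as perceptually distinguishable
# only when the lower bound exceeds the threshold (conventionally 1 JND):
# mean, lo, hi = bootstrap_mean_jnd(distances)
# distinguishable = lo > 1.0
```

The point of the lower-bound test, as the snippet states, is to require that the whole confidence interval sits above the discrimination threshold, not merely the mean.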