Endogenous attention modulates the amplitude and phase coherence of steady-state visual-evoked potentials (SSVEPs). In efforts to decipher the neural mechanisms of attentional modulation, we compared the time course of attentional modulation of SSVEP amplitude (thought to reflect the magnitude of neural population activity) and phase coherence (thought to reflect neural response synchronization). We presented two stimuli flickering at different frequencies in the left and right visual hemifields and asked observers to shift their attention to either stimulus. Our results demonstrated that attention increased SSVEP phase coherence earlier than it increased SSVEP amplitude, with a positive correlation between the attentional modulations of SSVEP phase coherence and amplitude. Furthermore, the behavioral dynamics of attention shifts were more closely associated with changes in phase coherence than with changes in amplitude. These results are consistent with the possibility that attention increases neural response synchronization, which in turn leads to increased neural population activity.
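The two SSVEP measures compared above can be estimated from single-trial spectra: amplitude from the mean magnitude at the flicker frequency, and phase coherence from the resultant length of the single-trial phases (inter-trial coherence). A minimal sketch, assuming an (n_trials, n_samples) EEG array and a known flicker frequency; the function and variable names are ours, not the authors':

```python
import numpy as np

def ssvep_amplitude_and_itc(trials, fs, flicker_hz):
    """Estimate SSVEP amplitude and inter-trial phase coherence (ITC)
    at the flicker frequency from an (n_trials, n_samples) EEG array."""
    n_trials, n_samples = trials.shape
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    bin_idx = np.argmin(np.abs(freqs - flicker_hz))   # nearest FFT bin
    spectra = np.fft.rfft(trials, axis=1)[:, bin_idx]  # complex, one value per trial
    # mean single-trial amplitude (factor 2/N converts to sinusoid amplitude)
    amplitude = np.mean(np.abs(spectra)) * 2.0 / n_samples
    # length of the mean unit phasor: 1 = perfectly phase-locked, ~0 = random phase
    itc = np.abs(np.mean(spectra / np.abs(spectra)))
    return amplitude, itc
```

On simulated trials, phase-locked responses yield ITC near 1 while trials with random phase yield ITC near 0, which is the contrast the amplitude measure alone cannot capture.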
Whether fundamental visual attributes, such as color, motion, and shape, are analyzed separately in specialized pathways has been one of the central questions of visual neuroscience. Although recent studies have revealed various forms of cross-attribute interactions, including significant contributions of color signals to motion processing, it is still widely believed that color perception is relatively independent of motion processing. Here, we report a new color illusion, motion-induced color mixing, in which moving bars whose color alternates between two colors (e.g., red and green) are perceived in the mixed color (e.g., yellow) even though the two colors are never superimposed on the retina. The magnitude of color mixture is significantly stronger than that expected from direction-insensitive spatial integration of color signals. This illusion cannot be ascribed to optical image blurs, including those induced by chromatic aberration, or to involuntary eye movements of the observer. Our findings indicate that color signals are integrated not only at the same retinal location, but also along a motion trajectory. It is possible that this neural mechanism helps us to see veridical colors for moving objects by reducing motion blur, as in the case of luminance-based pattern perception.
Despite numerous prior studies, important questions about the Japanese color lexicon persist, particularly about the number of Japanese basic color terms and their deployment across color space. Here, 57 native Japanese speakers provided monolexemic terms for 320 chromatic and 10 achromatic Munsell color samples. Through k-means cluster analysis we revealed 16 statistically distinct Japanese chromatic categories. These included eight chromatic basic color terms (aka/red, ki/yellow, midori/green, ao/blue, pink, orange, cha/brown, and murasaki/purple) plus eight additional terms: mizu ("water")/light blue, hada ("skin tone")/peach, kon ("indigo")/dark blue, matcha ("green tea")/yellow-green, enji/maroon, oudo ("sand or mud")/mustard, yamabuki ("globeflower")/gold, and cream. Of these additional terms, mizu was used by 98% of informants, and emerged as a strong candidate for a 12th Japanese basic color term. Japanese and American English color-naming systems were broadly similar, except for color categories present in only one language (mizu and kon in Japanese; teal, lavender, magenta, and lime in English) that had no equivalent in the other. Our analysis revealed two statistically distinct Japanese motifs (or color-naming systems), which differed mainly in the extension of mizu across our color palette. Comparison of the present data with an earlier study by Uchikawa & Boynton (1987) suggests that some changes in the Japanese color lexicon have occurred over the last 30 years.
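The k-means approach used above can be illustrated on color-naming data: represent each color sample by the proportion of informants applying each term, then cluster samples with similar naming profiles. A minimal sketch of the clustering idea, not the paper's exact pipeline; the function name and data layout are our assumptions:

```python
import numpy as np

def cluster_color_samples(naming_counts, k, n_iter=100, seed=0):
    """Toy k-means over color samples. `naming_counts` is an
    (n_samples, n_terms) array of how many informants used each term
    for each sample; rows are normalized to naming proportions."""
    rng = np.random.default_rng(seed)
    X = naming_counts / naming_counts.sum(axis=1, keepdims=True)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random initial centers
    for _ in range(n_iter):
        # squared Euclidean distance of every sample to every center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```

Samples that most informants call by the same term end up in the same cluster, which is how statistically distinct chromatic categories can be counted.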
We investigated coordinated movements between the eyes and head (“eye-head coordination”) in relation to vision for action. Several studies have measured eye and head movements during a single gaze shift, focusing on the mechanisms of motor control during eye-head coordination. However, in everyday life, gaze shifts occur sequentially and are accompanied by movements of the head and body. Under such conditions, visual cognitive processing influences eye movements and might also influence eye-head coordination because sequential gaze shifts include cycles of visual processing (fixation) and data acquisition (gaze shifts). In the present study, we examined how the eyes and head move in coordination during visual search in a large visual field. Subjects moved their eyes, head, and body without restriction inside a 360° visual display system. We found patterns of eye-head coordination that differed from those observed in single gaze-shift studies. First, we frequently observed multiple saccades during one continuous head movement, and the contribution of head movement to gaze shifts increased as the number of saccades increased. This relationship between head movements and sequential gaze shifts suggests eye-head coordination over several saccade-fixation sequences; this could be related to cognitive processing because saccade-fixation cycles are the result of visual cognitive processing. Second, the distribution bias of eye position during gaze fixation was highly correlated with head orientation. The distribution peak of eye position was biased in the same direction as head orientation. This influence of head orientation suggests that eye-head coordination is involved in gaze fixation, when the visual system processes retinal information. This further supports the role of eye-head coordination in visual cognitive processing.
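The head's contribution to a gaze shift quantified above follows from the standard identity that gaze-in-space equals eye-in-head plus head-in-space. A minimal sketch (names and units are ours):

```python
def head_contribution(eye_in_head_deg, head_deg):
    """Fraction of a horizontal gaze shift carried by the head.
    Uses the identity: gaze-in-space = eye-in-head + head-in-space."""
    gaze = eye_in_head_deg + head_deg
    if gaze == 0:
        raise ValueError("no gaze shift to decompose")
    return head_deg / gaze
```

For example, a 50° gaze shift composed of a 20° saccade and a 30° head movement has a head contribution of 0.6; the finding above is that this fraction grows across sequential saccades made during one continuous head movement.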
Under colored illumination, the achromatic point (the point in the chromaticity diagram seen as colorless) shifts toward the chromaticity of the illuminant. This investigation measured the loci of achromatic points for various intensities of a test field presented in a real rather than a simulated environment, lit by illuminants of various chromaticities. The achromatic point varied markedly with the intensity level of the test field: for dim test fields it was close to the surround chromaticity, but for high luminance test fields it was almost invariant with the surround chromaticity. The varying achromatic settings imply a variation in the relative effectiveness of the different cone types, but this variation originates in the postreceptoral system rather than at the photoreceptors themselves: flicker photometric sensitivity was almost independent of the illuminant in all cases. Nor does the variation take the simple form of a sensitivity-scaling coefficient; such a model cannot predict the observed dependence of the achromatic setting on test intensity. The data could, however, be modeled with a scheme in which the log of the relative cone weight implicit in the achromatic setting depends almost linearly on (1) the log of the relative cone excitation by the illuminant and (2) the log of the test field intensity.
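The scheme described above is a linear model in log coordinates, log w ≈ a·log E + b·log I + c, where w is the relative cone weight implicit in the achromatic setting, E the relative cone excitation by the illuminant, and I the test-field intensity. The symbols and this exact parameterization are our assumptions; a least-squares fit can be sketched as:

```python
import numpy as np

def fit_log_linear_weight(log_E, log_I, log_w):
    """Least-squares fit of log w ~ a*log E + b*log I + c.
    log_E, log_I, log_w are 1-D arrays of per-condition measurements;
    returns the coefficients (a, b, c)."""
    A = np.column_stack([log_E, log_I, np.ones_like(log_E)])
    coef, *_ = np.linalg.lstsq(A, log_w, rcond=None)
    return coef
```

Fitting such a plane to the achromatic settings across illuminants and test intensities is one way to test whether the log-log-linear description holds.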
The color selectivity of neurons in human visual cortex is considered more diverse than that of cone-opponent mechanisms. We addressed this issue by deriving histograms of hue-selective voxels measured using fMRI with a novel stimulation paradigm, in which the stimulus hue changed continuously. Despite the large between-subject difference in hue-selective histograms, individual voxels exhibited selectivity for intermediate hues, such as purple, cyan, and orange, in addition to those along cone-opponent axes. To rule out the possibility that the selectivity for intermediate hues emerged through spatial summation of activities of neurons selectively responding to cone-opponent signals, we further tested hue-selective adaptation in intermediate directions of cone-opponent axes, by measuring responses to 4 diagonal hues during concurrent adaptation to 1 of the 4 hues. The selective and unidirectional reduction in response to the adapted hue lends support to our argument that cortical neurons respond selectively to intermediate hues.
The neural basis of illusory motion perception evoked from static images has not been well established. We examined changes in neural activity in motion-sensitive areas of the human visual cortex using functional magnetic resonance imaging (fMRI) when a static illusory-motion image ('Rotating Snakes') was presented to participants. The blood-oxygenation-level dependent (BOLD) signal changes were compared between the test stimulus that induced illusory motion perception and the control stimulus that did not. Comparisons were also made between conditions with and without eye movements. Signal changes for the test stimulus were significantly larger than those for the control stimulus when accompanied by eye movements. On the other hand, the difference in signal changes between test and control stimuli was smaller when steady fixation was required. These results support the empirical finding that this illusion is related to some component of eye movements.
Perceptual color space is continuous; however, we tend to divide it into only a small number of categories. It is unclear whether categorical color perception is obtained solely through the development of the visual system or whether it is affected by language acquisition. To address this issue, we recruited prelinguistic infants (5- to 7-mo-olds) to measure changes in brain activity in relation to categorical color differences by using near-infrared spectroscopy (NIRS). We presented two sets of geometric figures to infants: One set alternated in color between green and blue, and the other set alternated between two different shades of green. We found a significant increase in hemodynamic responses during the between-category alternations, but not during the within-category alternations. These differences in hemodynamic response based on categorical relationship were observed only in the bilateral occipitotemporal regions, and not in the occipital region. We confirmed that categorical color differences yield behavioral differences in infants. We also observed comparable hemodynamic responses to categorical color differences in adults. The present study provided the first evidence, to our knowledge, that colors of different categories are represented differently in the visual cortex of prelinguistic infants, which implies that color categories may develop independently before language acquisition.

Humans can discriminate thousands of colors in a continuous color space. However, we use only a handful of color terms to describe colors in our daily communication. From analyses of data from the World Color Survey (www1.icsi.berkeley.edu/wcs/), a corpus of color-naming data from 110 unwritten languages, many studies have revealed that the color terms used by speakers form particular structures, and that these structures possess some common features (1, 2).
Furthermore, these common features have been found even in the color perception of infants before the acquisition of color terms (3-5). These results imply that categorical color perception may have some biological basis across cultures and languages. On the other hand, one argument about categorical color perception is that the color lexicon changes perceptual differences among colors, so that colors from the same linguistic category appear much closer than colors of different categories (6, 7). A possible hypothesis is that categorical color perception has an innate perceptual foundation, which is then modified along with the acquisition of language (8). A recent set of studies focusing on hemispheric asymmetries in categorical color perception has added another perspective to this hypothesis. Gilbert et al. (9) found that the reaction time for detecting a colored target among differently colored distractors was faster when the target and distractors belonged to different categories than when they belonged to the same category. They named this phenomenon the color-category effect, and reported that this category effect is evident only when the target was in the ...