It is commonly held that yellow is happy and blue is sad, but the reason remains unclear. Part of the problem is that researchers tend to focus on understanding why yellow is happy and blue is sad, but this may be a misleading characterization of color-emotion associations. In this study, we disentangle the contribution of lightness, chroma, and hue in color-happy/sad associations by controlling for lightness and chroma either statistically or colorimetrically. We found that after controlling for lightness and chroma, colors with blue hue were no sadder than colors with yellow hue, and in some cases, colors with blue hue were actually happier. These results can help guide future efforts to understand the nature of color-emotion associations.
When recognizing objects in our environments, we rely on both what we see and what we know. While elderly adults have been found to display increased sensitivity to top-down influences of contextual information during object recognition, the locus of this increased sensitivity remains unresolved. To address this issue, we examined the effects of aging on the neural dynamics of bottom-up and top-down visual processing during rapid object recognition. Specific EEG ERP components indexing bottom-up and top-down processes along the visual processing stream were assessed while systematically manipulating the degree of object ambiguity and scene context congruity. An increase in early attentional feedback mechanisms (as indexed by N1) as well as a functional reallocation of executive attentional resources (as indexed by P200) prior to object identification were observed in elderly adults, while postperceptual semantic integration (as indexed by N400) remained intact. These findings suggest that compromised bottom-up perceptual processing of visual input in healthy aging leads to an increased involvement of top-down processes to resolve greater perceptual ambiguity during object recognition.

Real-world objects are rarely seen in the absence of any contextual information. Rather, our day-to-day visual experience includes exposure to multiple types of normative or expected contextual relationships between object and scene, including spatial-physical relations (e.g., a computer monitor is normally on a desk rather than beneath it), semantic associations (e.g., a piano is in the living room rather than the kitchen), relative spatial orientations (e.g., chairs are normally oriented toward the table; cars are oriented toward the driving direction of a street), and scene probability (e.g., a pillow fight happening on the street is less probable than one happening in a bedroom).
There is now substantial evidence that contextual information that is consistent with our knowledge, expectations, or visual experience facilitates object recognition.
Objective: Accessing semantic representations of real-world objects requires integration of multimodal perceptual features that are represented across relevant neocortical areas. Early Alzheimer’s disease (AD) neuropathology, including neurofibrillary tangles in the perirhinal cortex as well as disrupted cortico-cortical connectivity, would be expected to disrupt the integration of object features. This integration deficit may underlie AD patients’ semantic memory deficits and would be predicted to be more prominent for living objects, which tend to be more defined by sensory features compared with nonliving objects. Method: Two experiments were conducted to assess feature integration in cognitively healthy older adults and patients with amnestic mild cognitive impairment (MCI). In both experiments, pictures of real-world objects were presented in congruent or incongruent colors. Participants were instructed to make a speeded color congruency judgment (Experiment 1) or name the presented surface color (Experiment 2). Results: Across experiments, MCI patients showed a selective integration deficit for living, but not nonliving, objects across both experimental paradigms that was consistent with a deterioration in semantic structural representations rather than a deficit in controlled semantic retrieval. Planned secondary analyses with a subset of patients (Experiment 1) for whom PET imaging was available indicated that the degree of impairment was associated with the magnitude of cortical amyloid burden. Conclusions: These findings suggest that early AD pathology leads to impaired integration of distributed semantic object representations. The development of integration tasks as sensitive markers of early AD pathology may lead to more effective diagnostic tools for early detection and intervention.