Social groups across species rapidly self-organize into hierarchies, where members vary in their level of power, influence, skill, or dominance. In this review we explore the nature of social hierarchies and the traits associated with status in both humans and nonhuman primates, and how status varies across development in humans. Our review finds that we can rapidly identify social status based on a wide range of cues. Like monkeys, we tend to use certain cues, like physical strength, to make status judgments, although layered on top of these more primitive perceptual cues are socio-cultural status cues like job titles and educational attainment. One's relative status has profound effects on attention, memory, and social interactions, as well as health and wellness. These effects can be particularly pernicious in children and adolescents. Developmental research on peer groups and social exclusion suggests teenagers may be particularly sensitive to social status information, but research focused specifically on status processing and associated brain areas is very limited. Recent evidence from neuroscience suggests there may be an underlying neural network, including regions involved in executive, emotional, and reward processing, that is sensitive to status information. We conclude with questions for future research and stress the need to expand social neuroscience research on status processing to adolescents.
Social behavior is often shaped by the rich storehouse of biographical information that we hold for other people. In our daily life, we rapidly and flexibly retrieve a host of biographical details about individuals in our social network, which often guide our decisions as we navigate complex social interactions. Even abstract traits associated with an individual, such as their political affiliation, can cue a rich cascade of person-specific knowledge. Here, we asked whether the anterior temporal lobe (ATL) serves as a hub for a distributed neural circuit that represents person knowledge. Fifty participants across two studies learned biographical information about fictitious people in a 2-d training paradigm. On day 3, they retrieved this biographical information while undergoing an fMRI scan. A series of multivariate and connectivity analyses suggest that the ATL stores abstract person identity representations. Moreover, this region coordinates interactions with a distributed network to support the flexible retrieval of person attributes. Together, our results suggest that the ATL is a central hub for representing and retrieving person knowledge.

Keywords: person knowledge | anterior temporal lobe | person identity node | semantic memory | social neuroscience

As social creatures, it is essential that we develop a rich storehouse of knowledge about other members of our social network, such as who they are, how they look and sound, where they live, and what they do for a living. However, little is known about how and where such "person knowledge" is represented, stored, and retrieved in the brain.
This inquiry is challenging because person knowledge is highly multimodal and multifaceted, being linked to both abstract features such as personality and social status as well as more concrete features such as eye color; in addition, familiar individuals are associated with detailed episodic and semantic memories (e.g., memories of shared experiences and biographic information) (1, 2). The neural circuit for person knowledge must therefore have the ability to combine multiple sources of information into an abstract representation accessible from multiple cues. An influential theory by Burton and Bruce (3) proposes that person recognition is achieved through a hierarchical process that begins with the activation of modality-specific recognition units that selectively respond to the presence of a known face, name, or voice. This information is then sent to an amodal person identity node (PIN) that integrates information from the modality-specific recognition units into a multimodal representation for that individual. Excitation of the PIN ultimately allows the retrieval of person-specific semantic information independently of stimulus modality (4, 5). A similar design is embedded in the "hub-and-spoke" theory of semantic knowledge, which proposes that different features of a concept (such as its color or taste) are distributed throughout the brain (the "spokes") and that a centralized "hub" integrates these features into a cohe...
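The hierarchical architecture described above can be illustrated with a toy sketch: modality-specific recognition units accumulate activation, and an amodal person identity node pools them, unlocking the same person-specific semantics regardless of which cues did the work. The class names, weights, and threshold below are illustrative assumptions, not parameters of the published Burton and Bruce model.

```python
from dataclasses import dataclass, field

@dataclass
class RecognitionUnit:
    """Modality-specific unit (face, name, or voice) for one known person."""
    modality: str
    activation: float = 0.0

@dataclass
class PersonIdentityNode:
    """Amodal PIN: pools activation across modality-specific units."""
    person: str
    units: dict = field(default_factory=dict)      # modality -> RecognitionUnit
    semantics: dict = field(default_factory=dict)  # person-specific knowledge
    threshold: float = 0.5                         # illustrative firing threshold

    def present_cue(self, modality: str, strength: float) -> None:
        # Each cue drives only its own modality-specific unit.
        self.units.setdefault(modality, RecognitionUnit(modality))
        self.units[modality].activation += strength

    def retrieve(self):
        # Retrieval is modality-independent: any combination of cues that
        # pushes pooled activation past threshold unlocks the same semantics.
        pooled = sum(u.activation for u in self.units.values())
        return self.semantics if pooled >= self.threshold else None

pin = PersonIdentityNode("J. Doe", semantics={"occupation": "physicist"})
pin.present_cue("face", 0.3)
assert pin.retrieve() is None          # a weak unimodal cue is insufficient
pin.present_cue("voice", 0.3)
assert pin.retrieve() == {"occupation": "physicist"}  # cues integrate across modality
```

The key property the sketch captures is that the semantics live behind the PIN, not behind any single recognition unit, so a face and a voice cue are interchangeable routes to the same knowledge.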
Episodic memory involves binding together what-where-when associations. In three experiments, we tested the development of memory for such contextual associations in a naturalistic setting. Children searched for toys in two rooms with two different experimenters; each room contained two identical sets of four containers, but arranged differently. A distinct toy was hidden in a distinct container in each room. In Experiment 1, which involved children between 15 and 26 months who were prompted with a very explicit cue (a part of the hidden toy), we found a marked shift in performance with age: while 15- to 20-month-olds concentrated their searches on the two containers that sometimes contained toys, they did not distinguish between them according to context, but 21- to 26-month-olds did. However, surprisingly, without toy cues, even the youngest children showed a fragile ability to disambiguate the two containers by room context. In Experiment 2, we tested 34- to 40-month-olds and 64- to 72-month-olds without toy cues. The 5-year-olds were nearly perfect, and the 3-year-olds showed a significant preference for the correct container given only the context. In Experiment 3, we filled in the age range, and also investigated the effects of the use of labels (i.e., names of experimenters and rooms) and of familiarization time, in groups of 34- to 40-month-olds, 42- to 48-month-olds, and 50- to 56-month-olds. Neither labels nor familiarization time had an effect. Across experiments, there was regular age-related improvement in context-based memory. Overall, the results suggest that children's episodic memory may undergo an early qualitative change, yet to be precisely characterized, and that continuing increments in the use of contextual cues occur throughout the preschool period. A video abstract of this article can be viewed at https://www.youtube.com/watch?v=DkwEFw0UEz4&list=PLwxXcOKHPC0llAPVcJyW4EtzlA934A2Rz&index=1.
Relational memory is a canonical form of episodic memory known to rely on the hippocampus. Several lines of evidence suggest that relational memory has a developmental trajectory in which it is fragile, inflexible, and error-prone until around 6 years of age, which seems to mirror maturational changes in the morphology of the hippocampus. However, recent findings from Richmond and Nelson (2009) challenge this idea, as they provided evidence of adult-like relational memory in 9-month-old infants. In this study, the authors measured the eye movements of infants and showed that they preferentially gazed at correct, as opposed to incorrect, face-scene pairings at test. The goal of the present study was to evaluate the development of relational memory by assessing 4-year-olds using Richmond and Nelson's task and stimuli, but gathering two dependent measures of relational memory: overt response as well as eye movements. The results show that overall, preferential looking at correct face-scene pairings was at chance; however, preferential looking was observed when the correct face-scene pair was later explicitly identified. Thus, while eye movements do index explicit memory in 4-year-olds, behavioral data are necessary to obtain a full picture of the development of relational memory in childhood.
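The chance-level comparison at the heart of preferential-looking analyses like the one above can be sketched in a few lines: each child's looking time is reduced to a proportion directed at the correct pairing, and the group mean is tested against chance (0.5). The looking times below are fabricated for illustration only; the real study's data and statistical tests may differ.

```python
import statistics

def proportion_to_correct(correct_ms: float, incorrect_ms: float) -> float:
    """Proportion of total looking time spent on the correct face-scene pair."""
    total = correct_ms + incorrect_ms
    return correct_ms / total if total else float("nan")

# One illustrative (correct_ms, incorrect_ms) pair per child; chance = 0.5.
looking_times = [(1200, 1100), (900, 950), (1500, 1000), (800, 780)]
props = [proportion_to_correct(c, i) for c, i in looking_times]

mean_prop = statistics.mean(props)
# One-sample t statistic against chance (0.5); compare to a t table with
# len(props) - 1 degrees of freedom to judge whether looking exceeds chance.
t_stat = (mean_prop - 0.5) / (statistics.stdev(props) / len(props) ** 0.5)
```

A group mean near 0.5, as in the overall 4-year-old result, yields a small t statistic; the abstract's point is that such a null at the group level can mask above-chance looking in the subset of trials the child can also identify explicitly.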
An emerging body of research has supported the existence of a small face-sensitive region in the ventral anterior temporal lobe (ATL), referred to here as the “anterior temporal face area”. The contribution of this region in the greater face-processing network remains poorly understood. The goal of the present study was to test the relative sensitivity of this region to perceptual as well as conceptual information about people and objects. We contrasted the sensitivity of this region to that of two highly studied face-sensitive regions, the fusiform face area (FFA) and the occipital face area (OFA), as well as a control region in early visual cortex (EVC). Our findings revealed that multivoxel activity patterns in the anterior temporal face area contain information about facial identity, as well as conceptual attributes such as one’s occupation. The sensitivity of this region to the conceptual attributes of people was greater than that of posterior face processing regions. In addition, the anterior temporal face area overlaps with voxels that contain information about the conceptual attributes of concrete objects, supporting a generalized role of the ventral ATLs in the identification and conceptual processing of multiple stimulus classes.
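Multivoxel pattern analyses of the kind reported in these abstracts ask whether a region's activity patterns discriminate identities (or attributes) at above-chance rates. A minimal correlation-based sketch, in the spirit of leave-one-run-out MVPA, is shown below on simulated data; the voxel counts, noise level, and effect sizes are illustrative assumptions, not values from any of the studies above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_ids, n_runs = 50, 4, 6

# Each identity gets a stable "signature" pattern plus run-by-run noise.
signatures = rng.normal(size=(n_ids, n_voxels))
runs = np.array([[sig + 0.8 * rng.normal(size=n_voxels) for sig in signatures]
                 for _ in range(n_runs)])          # shape: (runs, ids, voxels)

def leave_one_run_out_accuracy(runs: np.ndarray) -> float:
    """Classify each held-out pattern by its highest correlation with the
    mean pattern of each identity computed from the remaining runs."""
    n_runs, n_ids, _ = runs.shape
    correct = 0
    for test in range(n_runs):
        train_mean = runs[np.arange(n_runs) != test].mean(axis=0)  # (ids, voxels)
        for true_id in range(n_ids):
            r = [np.corrcoef(runs[test, true_id], train_mean[j])[0, 1]
                 for j in range(n_ids)]
            correct += int(np.argmax(r) == true_id)
    return correct / (n_runs * n_ids)

acc = leave_one_run_out_accuracy(runs)  # chance = 1 / n_ids = 0.25
```

Decoding accuracy reliably above chance is taken as evidence that the region's patterns carry the relevant information, which is the logic behind the identity and occupation decoding results described above.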