The limited capacity of visual working memory (VWM) requires an efficient information-selection mechanism. Although it has been shown that an irrelevant simple feature can be processed under low VWM load, its fate under high load (e.g., six objects) remains unclear. We explored this issue by probing the "irrelevant-change distracting effect," in which a change to a stored irrelevant feature affects performance. Simple colored shapes served as stimuli, with color as the target feature. Using a whole-probe method (presenting six objects in both the memory and test arrays), Experiment 1 showed that a change to one of the six shapes produced a significant distracting effect. Using a partial-probe method (presenting the probe either at the screen center or at a location selected from the memory array), Experiment 2 replicated the distracting effect. These results suggest that irrelevant simple features can be stored in VWM regardless of memory load.
Traditionally, objects of attention are characterized either as full-fledged entities or as elements grouped by Gestalt principles. Because humans appear to use social groups as units to explain social activities, we proposed that a group defined by social-interaction information could also serve as an object of attentional selection. We examined this hypothesis using displays with and without handshaking interactions. Results demonstrated that object-based attention, measured by an object-specific attentional advantage (i.e., shorter response times to targets on a single object), extended to two hands performing a handshake but not to hands performing no meaningful social interaction, even when they performed handshake-like actions. This finding cannot be attributed to familiarity arising from the frequent co-occurrence of two handshaking hands. Hence, object-based attention can select a grouped object whose parts are connected by a meaningful social interaction. This finding implies that object-based attention is constrained by top-down information.
Although our world is hierarchically organized, how hierarchical structures are perceived, attended to, and remembered remains largely unknown. The current study shows how a hierarchical motion representation enhances the inference of an object's position in a dynamic display. The motion hierarchy is formed as an acyclic tree in which each node represents a distinctive motion component, and each individual object is instantiated as a node in the tree. In a position-inference task, participants were asked to infer the position of a target object given how it moved jointly with other objects. The results showed that the inference is supported by the context formed by nontarget objects. More importantly, this contextual effect is (a) structured, with stronger support from objects forming a hierarchical tree than from those moving independently; (b) graded, with stronger support from objects closer to the target in the motion tree; and (c) directed, with stronger support from the target's ancestor nodes than from its descendant nodes. Computational modeling further indicated that the contextual effect cannot be explained by correlated and contingent movements without an explicit causal representation of the motion hierarchy. Together, these findings suggest that human vision is a form of intelligence that sees what is in a dynamic display by recovering why and how it was generated.
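The acyclic motion tree described above can be sketched as a simple data structure. The following is a minimal illustration only, not the study's actual model: the node names and velocity values are hypothetical, and the key idea shown is that an object's observed motion is the sum of the motion components along the path from the root of the tree to the object's node.

```python
# Minimal sketch of an acyclic motion tree: each node carries its own
# motion component, and an object's observed motion accumulates the
# components of all its ancestors (hypothetical values for illustration).
class MotionNode:
    def __init__(self, name, velocity, parent=None):
        self.name = name
        self.velocity = velocity      # this node's own motion component (vx, vy)
        self.parent = parent          # ancestor in the motion hierarchy, or None

    def observed_velocity(self):
        # Sum motion components up the ancestor chain to the root.
        vx, vy = self.velocity
        if self.parent is not None:
            px, py = self.parent.observed_velocity()
            vx, vy = vx + px, vy + py
        return (vx, vy)

# Hypothetical hierarchy: one shared group motion with two objects beneath it.
group = MotionNode("group", (1.0, 0.0))
target = MotionNode("target", (0.0, 0.5), parent=group)
context = MotionNode("context", (0.0, -0.5), parent=group)

print(target.observed_velocity())   # (1.0, 0.5)
```

Under this kind of representation, a context object sharing an ancestor with the target carries information about the target's motion, which is one way the graded, directed contextual support described above could arise.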
Human vision supports social perception by efficiently detecting agents and extracting rich information about their actions, goals, and intentions. Here, we explore the cognitive architecture of perceived animacy by constructing Bayesian models that integrate domain‐specific hypotheses of social agency with domain‐general cognitive constraints on sensory, memory, and attentional processing. Our model posits that perceived animacy combines a bottom–up, feature‐based, parallel search for goal‐directed movements with a top–down selection process for intent inference. The interaction of these architecturally distinct processes makes perceived animacy fast, flexible, and yet cognitively efficient. In the context of chasing, in which a predator (the “wolf”) pursues a prey (the “sheep”), our model addresses the computational challenge of identifying target agents among varying numbers of distractor objects, despite a quadratic increase in the number of possible interactions as more objects appear in a scene. By comparing modeling results with human psychophysics in several studies, we show that the effectiveness and efficiency of human perceived animacy can be explained by a Bayesian ideal observer model with realistic cognitive constraints. These results provide an understanding of perceived animacy at the algorithmic level—how it is achieved by cognitive mechanisms such as attention and working memory, and how it can be integrated with higher‐level reasoning about social agency.
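The quadratic growth in candidate interactions mentioned above can be made concrete with a toy count (an illustration of the combinatorics only, not the paper's model): with n moving objects, there are n(n-1) ordered wolf-sheep pairs to evaluate, since any object could be the wolf and any other the sheep.

```python
# Toy illustration: the number of ordered (wolf, sheep) candidate pairs
# grows quadratically with the number of objects in the scene.
def candidate_pairs(n):
    # n choices of wolf times (n - 1) remaining choices of sheep.
    return n * (n - 1)

for n in (3, 5, 10):
    print(n, candidate_pairs(n))
```

Doubling the number of objects roughly quadruples the number of candidate interactions, which is the computational challenge the model's attentional selection process is meant to tame.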
Understanding actions plays an important role in our social life. Such processing has been suggested to be reflected in the EEG mu rhythm (8–13 Hz over sensorimotor regions). However, it remains unclear whether the mu rhythm is modulated by the social nature of coordination information in interactive actions (i.e., interdependency). This study used a novel manipulation of social coordination information: in a computer-based task, participants viewed a replay of two chasers pursuing a common target either coordinately (coordinated chase) or independently (solo chase). To distinguish the potential effect of social coordination information from that of object-directed goal information, a control version of each condition was created by randomizing one chaser's movement. In a second experiment, we made the target invisible to participants to control for low-level properties. Watching replays of coordinated chases induced stronger mu suppression than solo chases, although both involved a common target. These effects were not explained by attentional mechanisms or low-level physical patterns (e.g., the degree of physical synchronization). Therefore, the current findings suggest that the processing of social coordination information is reflected in the mu rhythm, a function that may characterize the activity of the human mirror neuron system.