The limited capacity of visual working memory (VWM) requires an efficient information-selection mechanism. While it has been shown that under low VWM load an irrelevant simple feature can be processed, its fate under high load (e.g., six objects) remains unclear. We explored this issue by probing the "irrelevant-change distracting effect," in which a change to a stored irrelevant feature affects performance. Simple colored shapes were used as stimuli, with color as the target feature. Using a whole-probe method (presenting six objects in both the memory and test arrays), in Experiment 1 we found that a change to one of the six shapes led to a significant distracting effect. Using a partial-probe method (presenting the probe either at the screen center or at a location selected from the memory array), in Experiment 2 we observed the distracting effect again. These results suggest that irrelevant simple features can be stored in VWM, regardless of memory load.
Traditionally, objects of attention are characterized either as full-fledged entities or as elements grouped by Gestalt principles. Because humans appear to use social groups as units to explain social activities, we proposed that a socially defined group, based on social interaction information, could also be an object of attentional selection. This hypothesis was examined using displays with and without handshaking interactions. Results demonstrated that object-based attention, measured by an object-specific attentional advantage (i.e., shorter response times to targets on a single object), extended to two hands performing a handshake but not to hands that did not perform a meaningful social interaction, even when they performed handshake-like actions. This finding cannot be attributed to familiarity arising from the frequent co-occurrence of two handshaking hands. Hence, object-based attention can select a grouped object whose parts are connected within a meaningful social interaction. This finding implies that object-based attention is constrained by top-down information.
Although our world is hierarchically organized, how hierarchical structures are perceived, attended to, and remembered remains largely unknown. The current study shows how a hierarchical motion representation enhances the inference of an object's position in a dynamic display. The motion hierarchy is formed as an acyclic tree in which each node represents a distinctive motion component, and each individual object is instantiated as a node in the tree. In a position-inference task, participants were asked to infer the position of a target object, given how it moved jointly with other objects. The results showed that the inference is supported by the context formed by nontarget objects. More importantly, this contextual effect is (a) structured, with stronger support from objects forming a hierarchical tree than from those moving independently; (b) graded, with stronger support from objects closer to the target in the motion tree; and (c) directed, with stronger support from the target's ancestor nodes than from its descendant nodes. Computational modeling further indicated that the contextual effect cannot be explained by correlated and contingent movements without an explicit causal representation of the motion hierarchy. Together, these results suggest that human vision is a form of intelligence that sees what is in a dynamic display by recovering why and how it was generated.