There is increasing experimental and neuropsychological evidence that action selection is directly constrained by perceptual information from objects as well as by more abstract semantic knowledge. To capture this evidence, we develop a new connectionist model of action and name selection from objects, NAM (Naming and Action Model), based on the idea that action selection is determined by convergent input from both visual structural descriptions and abstract semantic knowledge. We show that NAM is able to simulate evidence for a direct route to action selection from both normal subjects (Experiments 1 and 2) and neuropsychological patients (Experiments 3-6). The model provides a useful framework for understanding how perceptual knowledge influences action selection.
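The convergent-input idea in this abstract can be caricatured in a few lines: two routes (a direct visual route from structural descriptions and an indirect semantic route) feed a shared set of action units, and "lesioning" one route corresponds to silencing its contribution. This is a minimal illustrative sketch only; the weights are random and none of the names or parameters come from the published NAM implementation.

```python
import numpy as np

# Hypothetical sketch of convergent dual-route input to action selection.
# All names and weights are illustrative, not taken from the NAM model.

rng = np.random.default_rng(0)
n_visual, n_semantic, n_actions = 8, 6, 4

# Random illustrative weights for each route into a shared action layer.
W_visual = rng.normal(size=(n_actions, n_visual))
W_semantic = rng.normal(size=(n_actions, n_semantic))

def select_action(visual_input, semantic_input,
                  visual_gain=1.0, semantic_gain=1.0):
    """Sum convergent input from both routes and pick the most active
    action unit. A route 'lesion' is simulated by setting its gain to 0."""
    activation = (visual_gain * W_visual @ visual_input
                  + semantic_gain * W_semantic @ semantic_input)
    return int(np.argmax(activation)), activation

visual = rng.random(n_visual)
semantic = rng.random(n_semantic)

intact, _ = select_action(visual, semantic)
# Simulated semantic lesion: selection driven by the direct visual route alone.
lesioned, _ = select_action(visual, semantic, semantic_gain=0.0)
```

Under this caricature, comparing `intact` with `lesioned` responses is how one would probe whether a direct visual route alone can still drive action selection when semantic input is removed.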
We demonstrate that right-handed participants make faster classification responses to pairs of objects when the objects appear in the standard co-locations for right-handed actions than when they appear in reflected locations. These effects are greater when participants "weight" information for action when deciding if 2 objects are typically used together, compared with deciding if objects typically occur in a given context. The effects are enhanced, and affect both types of decision, when an agent is shown holding the objects. However, the effects are eliminated when the objects are not viewed from the first-person perspective and when words are presented rather than objects. The data suggest that (a) participants are sensitive to whether objects are positioned correctly for their own actions, (b) the position information is coded within an egocentric reference frame, (c) the critical representation involved is visual and not semantic, and (d) the effects are enhanced by a sense of agency. The results can be interpreted within a dual-route framework for action retrieval in which a direct visual route is influenced by affordances for action.
We discuss evidence indicating that human visual attention is strongly modulated by the potential of objects for action. The possibility of action between multiple objects enables the objects to be attended as a single group, and the fit between individual objects in a group and the action that can be performed influences responses to group members. In addition, having a goal state to perform a particular action affects the stimuli that are selected along with the features and area of space that is attended. These effects of action may reflect statistical learning between environmental cues that are linked by action and/or the coupling between perception and action systems in the brain. The data support the argument that visual selection is a flexible process that emerges as a need to prioritize objects for action.
We report three experiments in which name verification responses to either objects (Experiments 1 and 2) or hand movements (Experiment 3) were compared with action decisions, where participants verified whether an object is typically used in the way described by a verbal label. In Experiments 1 and 2, we report that action decisions show more consistent and larger effects of the congruency of either a handgrip or a type of movement than do name verification responses, although there was some effect of the congruency of the handgrip on name verification. In Experiment 3, we demonstrate that the congruency of the object being moved affects both action and name verification responses to hand movements. We discuss the data relative to accounts of how actions and names are accessed by visually presented objects and in relation to work on the information called upon in classification tasks.
Two experiments are reported that use patients with visual extinction to examine how visual attention is influenced by action information in images. In Experiment 1 patients saw images of objects that were either correctly or incorrectly colocated for action, with the objects held by hands that were congruent or incongruent with those used premorbidly by the patients. The images were also shown from a 1st- and 3rd-person perspective. There was an overall reduction in extinction for objects colocated for action. In addition, there was an extra benefit when the objects were held in hands congruent with those used by the patients and when the objects were seen from a 1st-person perspective. This last result fits with an effect of motor simulation, over and above a purely visual effect based on positioning objects correctly for action. Experiment 2 showed that effects of hand congruence could emerge with images depicted from a 3rd-person perspective when patients saw themselves holding the objects. The data indicate 2 effects of action information on extinction: (a) an effect of colocating objects for action, which does not depend on a self-reference frame (a visual effect), and (b) an effect sensitive to object-hand congruence, which does depend on a self-reference frame (a motor-based effect). The self-reference frame is induced when stimuli are viewed from a 1st-person perspective and when an image of the self is seen from a 3rd-person perspective. Both visual and motor-based effects of action information facilitate the spread of attention across objects.