Optimal control models of biological movement account for the internal variables that constrain voluntary goal-directed actions. They do not, however, take into account external environmental constraints such as those associated with social intention. Here we investigated the effects of the social context on the kinematic characteristics of sequential actions consisting of placing an object on an initial pad (preparatory action) before reaching and grasping the object as fast as possible to move it to another location (main action). Reach-to-grasp actions were performed either in isolation or in the presence of a partner (audience effect) who was located in the near or far space (effect of shared reachable space) and who could intervene on the object either systematically (effect of social intention) or not (effect of social uncertainty). Results showed no audience effect but nevertheless an influence of the social context on both the main and the preparatory actions. In particular, a “localized” effect of shared reachable space was observed on the main action, which was smoother when performed within the reachable space of the partner. Furthermore, a “global” effect of social uncertainty was observed on both actions, with faster and jerkier movements. Finally, social intention affected the preparatory action, with larger wrist displacements and slower movements when the object was placed for the partner rather than for self-use. Overall, these results demonstrate specific effects of action space, social uncertainty, and social intention on the planning of reach-to-grasp actions, in particular on the preparatory action, which was performed with no specific execution constraint. These findings underline the importance of considering the social context in optimal models of action control for human–robot interactions, in particular when implementing the motor parameters required to afford intuitive interactions.
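Movement smoothness in studies of this kind is typically quantified with a jerk-based metric, but the abstract does not specify the exact measure used. The MATLAB sketch below therefore assumes a standard dimensionless normalized jerk computed from 3-D wrist coordinates; the function name, variable names, and sampling rate are illustrative, not taken from the study.

```matlab
function nj = normalized_jerk(pos, fs)
% NORMALIZED_JERK  Dimensionless smoothness index of a 3-D trajectory.
% A minimal sketch, assuming smoothness is assessed via integrated
% squared jerk scaled by duration and path length (lower = smoother).
%   pos : N-by-3 matrix of x, y, z positions (meters)
%   fs  : sampling frequency (Hz)
dt   = 1/fs;
vel  = diff(pos) / dt;                     % (N-1)-by-3 velocities (m/s)
acc  = diff(vel) / dt;                     % (N-2)-by-3 accelerations (m/s^2)
jerk = diff(acc) / dt;                     % (N-3)-by-3 jerks (m/s^3)
dur  = (size(pos,1) - 1) * dt;             % movement duration (s)
pathlen = sum(sqrt(sum(diff(pos).^2, 2))); % total path length (m)
isj = sum(sum(jerk.^2, 2)) * dt;           % integrated squared jerk
nj  = sqrt(0.5 * isj * dur^5 / pathlen^2); % dimensionless index
end
```

Under these assumptions, a call such as `nj = normalized_jerk(wrist_xyz, 200)` would yield a single smoothness value per trial, allowing the “smoother within shared reachable space” and “jerkier under social uncertainty” contrasts to be expressed on a common scale.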
For social animals, it is crucial to understand others’ intentions. But is it possible to detect social intention in two actions that have exactly the same motor goal? In the present study, we presented participants with video clips of an individual reaching for and grasping an object either to use it (personal trial) or to give his partner the opportunity to use it (social trial). In Experiment 1, we tested the ability of naïve participants to correctly classify social trials through simple observation of short video clips. In addition, detection levels were analyzed as a function of individual scores on psychological questionnaires of motor imagery, visual imagery, and social cognition. Results revealed that the between-participant heterogeneity in the ability to distinguish social from personal actions was predicted by social skill abilities. A second experiment was then conducted to assess which predictive mechanism could contribute to the detection of social intention. Video clips were sliced and normalized to control for the reaction time (RT) and/or the movement time (MT) of the grasping action. Tested in a second group of participants, the results showed that the detection of social intention relies on variations in both RT and MT that are implicitly perceived in the grasping action. The ability to use these motor deviants implicitly for action-outcome understanding may be the key to intuitive social interaction.
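RT and MT in such paradigms are commonly derived from the velocity profile of the reaching hand using an onset/offset threshold. Since the abstract does not detail the segmentation procedure, the following MATLAB sketch assumes a conventional velocity-threshold rule; the threshold value, function name, and variable names are illustrative assumptions.

```matlab
function [rt, mt] = segment_rt_mt(pos, fs, go_idx, thresh)
% SEGMENT_RT_MT  Derive reaction time (RT) and movement time (MT)
% from a 3-D trajectory with a simple velocity threshold (a sketch,
% not the procedure reported in the study).
%   pos    : N-by-3 positions (meters)
%   fs     : sampling frequency (Hz)
%   go_idx : sample index of the go signal
%   thresh : speed threshold in m/s (e.g., 0.02), assumed criterion
speed  = [0; sqrt(sum((diff(pos) * fs).^2, 2))];      % tangential speed
moving = speed > thresh;
onset  = find(moving(go_idx:end), 1, 'first') + go_idx - 1;
k      = find(~moving(onset:end), 1, 'first');        % first sub-threshold
offset = onset + k - 2;                               % last moving sample
rt = (onset - go_idx) / fs;   % go signal to movement onset (s)
mt = (offset - onset) / fs;   % movement onset to offset (s)
end
```

Slicing and time-normalizing the clips so that RT, MT, or both are equated across personal and social trials then isolates which of the two timing cues carries the detectable intention signal.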
We present here a toolbox for the real-time motion capture of biological movements that runs in the cross-platform MATLAB environment (The MathWorks, Inc., Natick, MA). It provides instantaneous processing of the 3-D movement coordinates of up to 20 markers at a single instant. Available functions include (1) the setting of reference positions, areas, and trajectories of interest; (2) the recording of the 3-D coordinates of each marker over the trial duration; and (3) the detection of events to use as triggers for external reinforcers (e.g., lights, sounds, or odors). Through fast online communication between the hardware controller and RTMocap, automatic trial selection is possible by means of either a preset or an adaptive criterion. Rapid preprocessing of signals is also provided, including artifact rejection, filtering, spline interpolation, and averaging. A key example is detailed, and typical variations are developed to provide a clear understanding of the importance of real-time control of 3-D motion in the cognitive sciences.

The Real-Time Motion Capture (RTMocap) Toolbox is a MATLAB toolbox dedicated to the instantaneous control and processing of 3-D motion capture data. It was developed to automatically trigger reinforcement sounds during reach-and-grasp object-related movements, but it is potentially useful in a wide range of other interactive situations: for instance, when directing voluntary movements to places in space (e.g., triggering a light when the hand reaches a predefined 3-D position in a room), when performing actions toward stationary or moving objects (e.g., triggering a sound when an object is grasped with the correct body posture), or simply as a way to reinforce social interactions (e.g., turning music on when a child looks at a person by moving the head in the proper direction). The RTMocap Toolbox is built from open-source code, distributed under the GPL license, and freely available for download at http://sites.google.com/site/RTMocap/.

The RTMocap Toolbox is mainly intended to work with recordings made with an infrared marker-based optical motion capture system. Such systems rely on an actively emitting source that pulses infrared light at a very high frequency; the light is then reflected by small, usually spherical markers attached to the tracked body parts and objects. With each camera capturing the position of the reflective markers in two dimensions (Fig. 1), a network of several cameras can be used to obtain position data in 3-D. The RTMocap Toolbox was developed with the Qualisys motion capture system, but it can be adapted, through slight changes in a few code lines, to other 3-D motion capture systems that provide real-time information in the MATLAB environment.
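To make the event-triggering idea concrete, here is a minimal MATLAB sketch of the kind of real-time loop such a toolbox runs: poll the camera system for the current marker coordinates, test the hand marker against a predefined 3-D target region, and fire a reinforcer once the criterion is met. The functions `acquire_frame` and `play_sound` are placeholders for the system-specific acquisition and output calls, not actual RTMocap API names, and all parameter values are illustrative.

```matlab
% Minimal sketch of a real-time trigger loop (placeholder functions,
% not the actual RTMocap API). A reinforcing sound is played when the
% hand marker enters a 5-cm sphere around a predefined 3-D position.
target = [0.30, 0.10, 0.75];       % reference position in meters (example)
radius = 0.05;                     % trigger radius (m)
fs     = 200;                      % camera frame rate (Hz), assumed
triggered = false;
while ~triggered
    xyz  = acquire_frame();        % placeholder: 20-by-3 marker coordinates
    hand = xyz(1, :);              % assume marker 1 is taped to the hand
    if norm(hand - target) < radius
        play_sound('reward.wav');  % placeholder: external reinforcer
        triggered = true;          % event detected, stop polling
    end
    pause(1/fs);                   % wait for the next frame
end
```

The same loop structure supports the other scenarios mentioned above by swapping the criterion (a room position, a grasp posture, a head orientation) and the reinforcer (light, sound, or music).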