It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses agreed with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability with which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in register, they can be optimally combined to provide a more precise estimate of visual locations in space than single-frame updating mechanisms would allow.
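The reliability-weighted combination described above corresponds to the standard inverse-variance (maximum-likelihood) integration rule. A minimal sketch, assuming Gaussian noise on each representation; the numeric values are illustrative, not data from the study:

```python
def optimal_combine(mu_eye, var_eye, mu_body, var_body):
    """Combine eye- and body-centered location estimates, weighting
    each by its reliability (the inverse of its variance)."""
    w_eye = (1.0 / var_eye) / (1.0 / var_eye + 1.0 / var_body)
    w_body = 1.0 - w_eye
    mu = w_eye * mu_eye + w_body * mu_body
    # The combined variance is lower than either single-frame variance,
    # which is why dual-frame updating yields a more precise estimate.
    var = 1.0 / (1.0 / var_eye + 1.0 / var_body)
    return mu, var

# Illustrative numbers: the noisier eye-centered estimate gets the
# smaller weight, and the fused variance (~0.8) beats both inputs.
mu, var = optimal_combine(mu_eye=10.0, var_eye=4.0, mu_body=12.0, var_body=1.0)
```

Here the fused estimate (≈11.6) sits closer to the more reliable body-centered value, exactly the weighting pattern the behavioral responses supported.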
Autism Spectrum Disorder (ASD), Oppositional Defiant Disorder (ODD), and Conduct Disorder (CD) are often associated with emotion recognition difficulties. This is the first eye-tracking study to examine emotional face recognition (i.e., gazing behavior) in a direct comparison of male adolescents with ASD or ODD/CD and typically developing (TD) individuals. We also investigated the role of psychopathic traits, callous–unemotional (CU) traits, and subtypes of aggressive behavior in emotional face recognition. A total of 122 male adolescents (N = 50 ASD, N = 44 ODD/CD, and N = 28 TD) aged 12–19 years (M = 15.4 years, SD = 1.9) were included in the eye-tracking experiment. Participants were presented with neutral and emotional faces while a Tobii 1750 eye-tracking monitor recorded gaze behavior. Our main dependent eye-tracking variables were (1) fixation duration on the eyes of a face and (2) time to first fixation on the eyes. Because the distributions of the eye-tracking variables were not Gaussian, non-parametric tests were used to compare gaze behavior across the ASD, ODD/CD, and TD groups. Furthermore, we used Spearman correlations to investigate the links with psychopathic traits, CU traits, and subtypes of aggression as assessed by questionnaires. The relative total fixation duration on the eyes was decreased in both the ASD and the ODD/CD group for several emotional expressions. In both the ASD and the ODD/CD group, an increased time to first fixation on the eyes was nominally significant for fearful faces only. The time to first fixation on the eyes was also nominally correlated with psychopathic traits and proactive aggression.
The current findings do not support strong claims for differential cross-disorder eye-gazing deficits, nor for a role of shared underlying psychopathic traits, callous–unemotional traits, and aggression subtypes. Our data provide valuable and novel insights into gaze timing distributions when looking at the eyes of a fearful face. Electronic supplementary material: The online version of this article (10.1007/s00787-018-1174-4) contains supplementary material, which is available to authorized users.
In most visuomotor tasks in which subjects reach to visual targets or move the hand along a particular trajectory, eye movements have been shown to lead hand movements. Because the dynamics of vergence eye movements differ from those of smooth pursuit and saccades, we investigated the lead time of gaze relative to the hand for the depth component (vergence) and in the frontal plane (smooth pursuit and saccades) in a tracking task and in a tracing task in which human subjects were instructed to move the finger along a 3D path. For tracking, gaze leads finger position on average by 28 ± 6 ms (mean ± SE) for the components in the frontal plane but lags finger position by 95 ± 39 ms for the depth dimension. For tracing, gaze leads finger position by 151 ± 36 ms for the depth dimension. For the frontal plane, the mean lead time of gaze relative to the hand is 287 ± 13 ms. However, we found that the lead time in the frontal plane was inversely related to the tangential velocity of the finger. This inverse relation for movements in the frontal plane could be explained by assuming that gaze leads the finger by a constant distance of ~2.6 cm (range of 1.5–3.6 cm across subjects).
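The constant-distance account in the last sentence implies a simple reciprocal relation between temporal lead and finger speed. A minimal sketch: the 2.6 cm figure comes from the abstract, while the finger speeds are illustrative assumptions:

```python
def lead_time_ms(lead_distance_cm, finger_speed_cm_per_s):
    """Temporal lead of gaze (in ms) implied by a constant spatial lead:
    lead_time = distance / speed, so faster movements give shorter leads."""
    return 1000.0 * lead_distance_cm / finger_speed_cm_per_s

# Illustrative speeds: doubling the tangential velocity halves the
# temporal lead, which reproduces the inverse relation reported above.
slow = lead_time_ms(2.6, 10.0)  # ~260 ms at 10 cm/s
fast = lead_time_ms(2.6, 20.0)  # ~130 ms at 20 cm/s
```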
In haptic exploration, when running a fingertip along a surface, the control system may attempt to anticipate upcoming changes in curvature in order to maintain a consistent level of contact force. Such predictive mechanisms are well known in the visual system, but have yet to be studied in the somatosensory system. Thus, the present experiment was designed to reveal human capabilities for different types of haptic prediction. A robot arm with a large 3D workspace was attached to the index fingertip and was programmed to produce virtual surfaces with curvatures that varied within and across trials. With eyes closed, subjects moved the fingertip around elliptical hoops with flattened regions or Limaçon shapes, where the curvature varied continuously. Subjects anticipated the corner of the flattened region rather poorly, but for the Limaçon shapes they varied finger speed with upcoming curvature according to the two-thirds power law. Furthermore, although the Limaçon shapes were randomly presented in various 3D orientations, modulation of contact force also indicated good anticipation of upcoming changes in curvature. The results demonstrate that it is difficult to haptically anticipate the spatial location of an abrupt change in curvature, whereas smooth changes in curvature appear to be handled by anticipatory prediction.
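The two-thirds power law mentioned above couples movement speed to path curvature; written for tangential speed it reads v = k · C^(−1/3), so the finger slows where the path curves more sharply. A minimal sketch with illustrative curvature values (the gain k is an arbitrary assumption, not a fitted parameter):

```python
def tangential_speed(curvature, gain=1.0):
    """Two-thirds power law, expressed for tangential speed:
    v = gain * curvature**(-1/3). Equivalently, angular speed scales
    as curvature**(2/3), which is what gives the law its name."""
    return gain * curvature ** (-1.0 / 3.0)

# Illustrative values: an 8-fold rise in curvature halves the speed,
# because 8**(1/3) == 2.
v_gentle = tangential_speed(curvature=0.1)
v_sharp = tangential_speed(curvature=0.8)
```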