Proprioceptive development relies on a variety of sensory inputs, among which vision is hugely dominant. Focusing on the developmental trajectory underpinning the integration of vision and proprioception, the present research explores how this integration is involved in interactions with Immersive Virtual Reality (IVR) by examining how proprioceptive accuracy is affected by Age, Perception, and Environment. Individuals from 4 to 43 years old completed a self-turning task which asked them to manually return to a previous location with different sensory modalities available in both IVR and reality. Results were interpreted from an exploratory perspective using Bayesian model comparison analysis, which allows the phenomena to be described using probabilistic statements rather than simplified reject/not-reject decisions. The most plausible model showed that 4-8-year-old children can generally be expected to make more proprioceptive errors than older children and adults. Across age groups, proprioceptive accuracy is higher when vision is available, and is disrupted in the visual environment provided by the IVR headset. We can conclude that proprioceptive accuracy mostly develops during the first eight years of life and that it relies largely on vision. Moreover, our findings indicate that this proprioceptive accuracy can be disrupted by the use of an IVR headset.
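The Bayesian model comparison mentioned in the abstract can be illustrated with a minimal sketch. Everything here is hypothetical: the simulated error values, group sizes, and the two candidate models (a common mean versus separate means per age group) are illustrative assumptions, not the authors' data or analysis, and the BIC difference is used only as a rough large-sample approximation to a Bayes factor.

```python
import math
import random

random.seed(1)

# Hypothetical proprioceptive errors (degrees) for two age groups.
young = [random.gauss(30, 8) for _ in range(40)]   # assumed 4-8-year-olds
older = [random.gauss(18, 8) for _ in range(40)]   # assumed 9+ years
data = young + older

def rss(groups):
    """Residual sum of squares when each group is fit by its own mean."""
    total = 0.0
    for g in groups:
        m = sum(g) / len(g)
        total += sum((x - m) ** 2 for x in g)
    return total

def bic(rss_val, n, k):
    """Gaussian BIC up to a constant: n*ln(RSS/n) + k*ln(n)."""
    return n * math.log(rss_val / n) + k * math.log(n)

n = len(data)
bic_null = bic(rss([data]), n, k=1)          # model 1: one common mean
bic_age = bic(rss([young, older]), n, k=2)   # model 2: mean differs by age

# A positive difference favours the age model; it approximates
# 2*ln(Bayes factor) under standard large-sample assumptions.
delta = bic_null - bic_age
print(f"delta BIC (null - age) = {delta:.1f}")
```

The lower-BIC model is the more plausible one; unlike a reject/not-reject decision, the BIC difference gives a graded measure of evidence between competing models.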
When learning and interacting with the world, people with Autism Spectrum Disorders (ASD) show compromised use of vision and enhanced reliance on body-based information. As this atypical profile is associated with motor and social difficulties, interventions could aim to reduce the potentially isolating reliance on the body and foster the use of visual information. To this end, head-mounted displays (HMDs) have unique features that enable the design of Immersive Virtual Realities (IVR) for manipulating and training sensorimotor processing. The present study assesses feasibility and offers some early insights from a new paradigm for exploring how children and adults with ASD interact with Reality and IVR when vision and proprioception are manipulated. Seven participants (five adults, two children) performed a self-turn task in two environments (Reality and IVR) for each of three sensory conditions (Only Proprioception, Only Vision, Vision + Proprioception) in a purpose-designed testing room and an HMD-simulated environment. The pilot indicates good feasibility of the paradigm. Preliminary data visualisation suggests the importance of considering inter-individual variability. The participants in this study who performed worse with Only Vision and better with Only Proprioception seemed to benefit from the use of IVR. Those who performed better with Only Vision and worse with Only Proprioception seemed to benefit from Reality. Therefore, we invite researchers and clinicians to consider that IVR may facilitate or impair individuals depending on their profiles.
From intrauterine life, our physical, psychological, and social development progresses thanks to the interaction between our genetic profile and the environment. Sensory information from both the external world (exteroception) and the self (interoception) is detected by our emerging sensory functions. We talk about exteroception when the sensory information comes from the environment around us (e.g. sight, hearing, touch), while interoception is the perception of our body and includes "temperature, pain, itch, tickle, sensual touch, muscular and visceral sensations, vasomotor flush, hunger, thirst" (p. 655 [1]). This information, which comes from different complementary sensory modalities, has to be integrated so that we can interact with and learn from the environment. The multisensory integration that follows takes time to develop and emerges in a heterochronous pattern: we rely on the various sensory modalities to different degrees at different points in the human developmental trajectory, during which the sensory modalities interact in different ways [2]. In general, our sensory development is driven by crossmodal calibration: one accurate sensory modality can improve performance based on information delivered by another, less accurate, sensory modality [3-5].

Proprioception: an emergent perception arising from a multisensory process

Both exteroception and interoception drive our discovery of the external world and the self. One important physical dimension of the concept of self is proprioception, whose definition is particularly complex and debated in the extant literature. Propr...
Atypical sensorimotor developmental trajectories greatly contribute to the profound heterogeneity that characterizes Autism Spectrum Disorders (ASD). Individuals with ASD manifest deviations in sensorimotor processing with early markers in the use of sensory information coming from both the external world and the body, as well as motor difficulties. The cascading effect of these impairments on the later development of higher-order abilities (e.g., executive functions and social communication) underlines the need for interventions that focus on the remediation of sensorimotor integration skills. One of the promising technologies for such stimulation is Immersive Virtual Reality (IVR). In particular, head-mounted displays (HMDs) have unique features that fully immerse the user in virtual realities which disintegrate and otherwise manipulate multimodal information. The contribution of each individual sensory input and of multisensory integration to perception and motion can be evaluated and addressed according to a user’s clinical needs. HMDs can therefore be used to create virtual environments aimed at improving people’s sensorimotor functioning, with strong potential for individualization for users. Here we provide a narrative review of the sensorimotor atypicalities evidenced by children and adults with ASD, alongside some specific relevant features of IVR technology. We discuss how individuals with ASD may interact differently with IVR versus real environments on the basis of their specific atypical sensorimotor profiles and describe the unique potential of HMD-delivered immersive virtual environments to this end.
Can cognitive load enhance concentration on task-relevant information and help filter out distractors? Most of the prior research in the area of selective attention has focused on visual attention or cross-modal distraction and has yielded controversial results. Here, we studied whether working memory load can facilitate selective attention when both target and distractor stimuli are auditory. We used a letter n-back task with four levels of working memory load and two levels of distraction: congruent and incongruent distractors. This combination of updating and inhibition tasks allowed us to manipulate working memory load within the selective attention task. Participants sat in front of three loudspeakers and were asked to attend to the letter presented from the central loudspeaker while ignoring that presented from the flanking ones (spoken by a different person), which could be the same letter as the central one (congruent) or a different (incongruent) letter. Their task was to respond whether or not the central letter matched the letter presented n (0, 1, 2, or 3) trials back. Distraction was measured in terms of the difference in reaction time and accuracy on trials with incongruent versus congruent flankers. We found reduced interference from incongruent flankers in 2- and 3-back conditions compared to 0- and 1-back conditions, whereby higher working memory load almost negated the effect of incongruent flankers. These results suggest that high load on verbal working memory can facilitate inhibition of distractors in the auditory domain rather than make it more difficult as sometimes claimed.
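The distraction measure described above (the reaction-time difference between incongruent and congruent flanker trials, per working-memory load) can be sketched as follows. The reaction times are made-up numbers chosen only to illustrate the computation, not data from the study.

```python
# Flanker interference = mean RT(incongruent) - mean RT(congruent),
# computed separately for each n-back load level.
# Keys are (load, condition); values are hypothetical RTs in ms.
rts = {
    (0, "congruent"):   [520, 540, 510],
    (0, "incongruent"): [590, 610, 600],
    (2, "congruent"):   [700, 720, 710],
    (2, "incongruent"): [705, 725, 715],
}

def interference(rts, load):
    """RT cost of incongruent flankers at a given working-memory load."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rts[(load, "incongruent")]) - mean(rts[(load, "congruent")])

for load in (0, 2):
    print(f"{load}-back interference: {interference(rts, load):.0f} ms")
```

With these illustrative values the interference shrinks at the higher load, mirroring the paper's finding that high verbal working-memory load nearly negates the incongruent-flanker effect; the same subtraction applies to accuracy instead of RT.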