Daily-life visuomotor activities combined with prism exposure are a useful tool for rehabilitating patients with unilateral spatial neglect (USN). This new treatment may improve compliance with prism exposure treatments and make them feasible within home-based programs.
Order of Authors: Cristina Cacciari, PhD; Nadia Bolognini, PhD; Irene Senna; Maria Concetta Pellicciari, PhD; Carlo Miniussi, PhD; Costanza Papagno

Abstract: We used Transcranial Magnetic Stimulation (TMS) to assess whether reading literal, nonliteral (i.e., metaphorical, idiomatic) and fictive motion sentences modulates the activity of the motor system. Sentences were divided into three segments visually presented one at a time: the noun phrase, the verb and the final part of the sentence. Single-pulse TMS was delivered at the end of the sentence over the leg motor area in the left hemisphere, and motor evoked potentials (MEPs) were recorded from the right gastrocnemius and tibialis anterior muscles. MEPs were larger when participants were presented with literal, fictive and metaphorical motion sentences than with idiomatic motion or mental sentences. These results suggest that the excitability of the motor system modulated by the motor component of the verb is preserved in fictive and metaphorical motion sentences.

Cover Letter: Dear Editor, thank you very much for accepting our manuscript. We have answered all the remaining points raised by Reviewer #1 and are attaching our responses in a separate file. We hope that our manuscript is now suitable for publication in Brain and Language. Sincerely, Costanza Papagno, PhD, MD. Milano, May 9th, 2011.

This study adds evidence to the debate concerning the role of the primary motor cortex in the comprehension of motion verbs, showing that the motor component of the verb is preserved in fictive and metaphorical sentences, while it is not when motion verbs are used in idiomatic contexts.

Response: we have briefly summarized the pilot experiment in which we found MEP activation effects for idiomatic sentences when TMS was delivered immediately after the verb, when the subject was animate. We have added the following paragraph: We performed a previous experiment on eight healthy participants (six females, mean age 29±3 years; mean education 17±1 years; Handedness Inventory test mean score 97.4%) using the same material and procedure, except that TMS was delivered immediately after the second segment, so that participants only read the noun phrase and the (motion or mental) verb before receiving TMS. This means that up to that point participants were unaware of the literal, metaphorical or idiomatic nature of the sentences, which only emerged afterwards (with the exception of sentences with an inanimate subject). In this experiment the effect of sentence type on motor cortical excitability was evaluated using the MEP changes expressed in terms of the ratio (∆) between motion and mental (i.e., control) sentences. A repeated-measures ANOVA with Sentence type (literal, fictive, idiomatic or metaphorical motion) as within-subject factor was used. The analysis did not show any significant effect on the GCM muscle [F (3, 21)
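The ratio-based analysis described above (per-subject MEP change, ∆, for each sentence type, followed by a repeated-measures ANOVA) can be sketched as follows. All data here are simulated placeholders, not the study's measurements, and the column names are hypothetical:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Simulated MEP amplitudes (mV) for 8 subjects x 4 sentence types.
rng = np.random.default_rng(0)
subjects = np.repeat(np.arange(8), 4)
sentence = np.tile(["literal", "fictive", "idiomatic", "metaphorical"], 8)
control = rng.normal(1.0, 0.10, 32)   # mental (control) sentence baseline
motion = rng.normal(1.1, 0.15, 32)    # motion-sentence MEPs

# Delta: ratio of motion-sentence MEP to the matched control MEP.
df = pd.DataFrame({"subject": subjects, "sentence": sentence,
                   "delta": motion / control})

# Repeated-measures ANOVA with sentence type as within-subject factor.
res = AnovaRM(df, depvar="delta", subject="subject",
              within=["sentence"]).fit()
print(res.anova_table)
```

With four sentence types and eight subjects, the effect of interest has 3 numerator and 21 denominator degrees of freedom, matching the F(3, 21) test reported above.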
Figure 1: Left: The basic virtual mirror scenario consists of an empty room and a simplistic mirror avatar. Right: The extended scenario employed in the experiment, where the target movement is shown by a semi-transparent blue "ghost character".

Abstract: Latency between a user's movement and visual feedback is inevitable in every Virtual Reality application, as signal transmission and processing take time. Unfortunately, a high end-to-end latency impairs perception and motor performance. While it is possible to reduce feedback delay to tens of milliseconds, these delays will never completely vanish. Currently, there is a gap in the literature regarding the impact of feedback delays on perception and motor performance, as well as on their interplay, in virtual environments employing full-body avatars. With the present study, we address this gap by performing a systematic investigation of different levels of delay across a variety of perceptual and motor tasks during full-body action inside a Cave Automatic Virtual Environment. We presented participants with their virtual mirror image, which responded to their actions with feedback delays ranging from 45 to 350 ms. We measured the impact of these delays on motor performance, sense of agency, sense of body ownership and simultaneity perception by means of psychophysical procedures. Furthermore, we looked at interaction effects between these aspects to identify possible dependencies. The results show that motor performance and simultaneity perception are affected by latencies above 75 ms. Although sense of agency and body ownership only decline at a latency higher than 125 ms, and deteriorate for a latency greater than 300 ms, they do not break down completely even at the highest tested delay. Interestingly, participants perceptually infer the presence of delays more from their motor error in the task than from the actual level of delay.
Whether or not participants notice a delay in a virtual environment might therefore depend on the motor task and their performance rather than on the actual delay.
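Thresholds like the 75 ms figure above are typically estimated with psychophysical procedures by fitting a psychometric function to detection rates across delay levels. A minimal sketch with hypothetical response data (the delay levels mirror the 45–350 ms range above; the proportions are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: proportion of "delayed" judgments per feedback latency (ms).
delays = np.array([45, 75, 125, 200, 275, 350], dtype=float)
p_detect = np.array([0.05, 0.20, 0.55, 0.85, 0.95, 0.98])

def logistic(x, x0, k):
    """Logistic psychometric function: x0 = 50%-detection threshold, k = slope."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Least-squares fit; p0 is a rough starting guess for threshold and slope.
(x0, k), _ = curve_fit(logistic, delays, p_detect, p0=[150.0, 0.05])
print(f"Estimated detection threshold: {x0:.0f} ms")
```

The fitted `x0` is the latency at which a delay is detected on half of the trials; steeper `k` means a sharper transition from "undetectable" to "obvious" delay.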
Our body is made of flesh and bones. We know it, and in our daily lives all the senses constantly provide converging information about this simple, factual truth. But is this always the case? Here we report a surprising bodily illusion demonstrating that humans rapidly update their assumptions about the material qualities of their body, based on their recent multisensory perceptual experience. To induce a misperception of the material properties of the hand, we repeatedly gently hit participants' hand with a small hammer, while progressively replacing the natural sound of the hammer against the skin with the sound of a hammer hitting a piece of marble. After five minutes, the hand started feeling stiffer, heavier, harder, less sensitive, unnatural, and showed enhanced Galvanic skin response (GSR) to threatening stimuli. Notably, such a change in skin conductivity positively correlated with changes in perceived hand stiffness. Conversely, when hammer hits and impact sounds were temporally uncorrelated, participants did not spontaneously report any changes in the perceived properties of the hand, nor did they show any modulation in GSR. In two further experiments, we ruled out that mere audio-tactile synchrony is the causal factor triggering the illusion, further demonstrating the key role of material information conveyed by impact sounds in modulating the perceived material properties of the hand. This novel bodily illusion, the ‘Marble-Hand Illusion', demonstrates that the perceived material of our body, surely the most stable attribute of our bodily self, can be quickly updated through multisensory integration.
Multisensory integration of information from different sensory modalities is an essential component of perception. Neurophysiological studies have revealed that audio-visual interactions occur early in time and even within sensory cortical areas believed to be modality-specific. Here we investigated the effect of auditory stimuli on visual perception of phosphenes induced by transcranial magnetic stimulation (TMS) delivered to the occipital visual cortex. TMS applied at subthreshold intensity led to the perception of phosphenes when coupled with an auditory stimulus presented within close spatiotemporal congruency at the expected retinotopic location of the phosphene percept. The effect was maximal when the auditory stimulus preceded the occipital TMS pulse by 40 ms. Follow-up experiments confirmed a high degree of temporal and spatial specificity of this facilitatory effect. Furthermore, audiovisual facilitation was only present at subthreshold TMS intensity for the phosphenes, suggesting that suboptimal levels of excitability within unisensory cortices may be better suited for enhanced cross-modal interactions. Overall, our findings reveal early auditory–visual interactions due to the enhancement of visual cortical excitability by auditory stimuli. These interactions may reflect an underlying anatomical connectivity between unisensory cortices.
Development of multisensory integration following prolonged early-onset visual deprivation

Highlights:
- Congenitally blind individuals regaining sight late acquire multisensory integration abilities
- The ability to integrate vision and touch develops quickly after surgery
- Some individuals reach optimal integration levels within years to benefit perception
- The development is based on experience and depends on post-surgical visual acuity
Early visual deprivation typically results in spatial impairments in other sensory modalities. It has been suggested that, since vision provides the most accurate spatial information, it is used for calibrating space in the other senses. Here we investigated whether sight restoration after prolonged early-onset visual impairment can lead to the development of more accurate auditory space perception. We tested participants who were surgically treated for congenital dense bilateral cataracts several years after birth. In Experiment 1 we assessed participants' ability to understand spatial relationships among sounds by asking them to spatially bisect three consecutive, laterally separated sounds. Participants tested after surgery performed better than those tested before surgery, but still worse than sighted controls. In Experiment 2, we demonstrated that single-sound localization in the two-dimensional frontal plane improves quickly after surgery, approaching performance levels of sighted controls. Such recovery seems to be mediated by visual acuity, as participants gaining higher post-surgical visual acuity performed better in both experiments. These findings provide strong support for the hypothesis that vision calibrates auditory space perception. Importantly, they also demonstrate that this process can occur even when vision is restored after years of visual deprivation.
Perception can often be described as a statistically optimal inference process whereby noisy and incomplete sensory evidence is combined with prior knowledge about natural scene statistics. Previous evidence has shown that humans tend to underestimate the speed of unreliable moving visual stimuli. This finding has been interpreted in terms of a Bayesian prior favoring low speed, given that in natural visual scenes objects are mostly stationary or slowly-moving. Here we investigated whether an analogous tendency to underestimate speed also occurs in audition: even if the statistics of the visual environment seem to favor low speed, the statistics of the stimuli reaching the individual senses may differ across modalities, hence potentially leading to different priors. Here we observed a systematic bias for underestimating the speed of unreliable moving sounds. This finding suggests the existence of a slow-motion prior in audition, analogous to the one previously found in vision. The nervous system might encode the overall statistics of the world, rather than the specific properties of the signals reaching the individual senses.
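The Bayesian account above can be illustrated numerically: with a Gaussian likelihood centered on the physical speed and a Gaussian prior centered at zero (the slow-motion prior), the posterior mean is a precision-weighted average that is pulled toward zero, and more strongly so as sensory noise grows. All numbers here are hypothetical and chosen only to show the effect:

```python
def posterior_speed(observed, sigma_like, prior_mean=0.0, sigma_prior=5.0):
    """Posterior mean for two Gaussians: precision-weighted average of
    the likelihood mean (the noisy observation) and the prior mean."""
    w = (1 / sigma_like**2) / (1 / sigma_like**2 + 1 / sigma_prior**2)
    return w * observed + (1 - w) * prior_mean

# A reliable (low-noise) moving sound is perceived close to its true speed;
# an unreliable (high-noise) one is underestimated, pulled toward the prior.
reliable = posterior_speed(10.0, sigma_like=1.0)    # close to 10
unreliable = posterior_speed(10.0, sigma_like=5.0)  # pulled toward 0
```

This reproduces the qualitative pattern reported above: the less reliable the motion signal, the stronger the underestimation of its speed.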