Braille reading is a demanding task that requires the identification of rapidly varying tactile patterns. During proficient reading, neighboring characters impact the fingertip at ≈100 ms intervals, and adjacent raised dots within a character at 50 ms intervals. Because the brain requires time to interpret afferent sensorineural activity, among other reasons, tactile stimuli separated by such short temporal intervals pose a challenge to perception. How, then, do proficient Braille readers successfully interpret inputs arising from their fingertips at such rapid rates? We hypothesized that somatosensory perceptual consolidation occurs more rapidly in proficient Braille readers. If so, Braille readers should outperform sighted participants on masking tasks, which demand rapid perceptual processing, but would not necessarily outperform the sighted on tests of simple vibrotactile sensitivity. To investigate, we conducted two-interval forced-choice vibrotactile detection, amplitude discrimination, and masking tasks on the index fingertips of 89 sighted and 57 profoundly blind humans. Sighted and blind participants had similar unmasked detection (25 ms target tap) and amplitude discrimination (compared with 100 µm reference tap) thresholds, but congenitally blind Braille readers, the fastest readers among the blind participants, exhibited significantly less masking than the sighted (masker: 50 Hz, 50 µm; target-masker delays: ±50 and ±100 ms). Indeed, Braille reading speed correlated significantly and specifically with masking task performance, and in particular with the backward masking decay time constant. We conclude that vibrotactile sensitivity is unchanged but that perceptual processing is accelerated in congenitally blind Braille readers.
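To illustrate what a backward-masking decay time constant means operationally, here is a minimal sketch of how one might be estimated from threshold elevations at several target-masker delays. The threshold values and the log-linear exponential fit are illustrative assumptions, not the study's data or analysis method.

```python
import numpy as np

# Hypothetical masked-threshold elevations (dB re: unmasked threshold)
# at four backward-masking target-masker delays (ms); illustrative only.
delays_ms = np.array([50.0, 100.0, 200.0, 400.0])
elevation_db = np.array([6.0, 3.5, 1.2, 0.3])

# Model: elevation = a * exp(-delay / tau). A log-linear least-squares
# fit on log(elevation) recovers the decay time constant tau.
slope, intercept = np.polyfit(delays_ms, np.log(elevation_db), 1)
tau_ms = -1.0 / slope
print(f"backward-masking decay time constant: {tau_ms:.0f} ms")
```

Under this model, a smaller tau means the masking effect dies away faster, which is one way "accelerated perceptual processing" could show up quantitatively.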
Sensory environments are commonly characterized by specific physical features, which sensory systems might exploit using dedicated processing mechanisms. In the tactile sense, one such characteristic feature is frictional movement, which gives rise to short-lasting (<10 ms), information-carrying integument vibrations. Rather than generic integrative encoding (i.e., averaging or spectral analysis capturing the "intensity" and "best frequency"), the tactile system might benefit from what we call a "temporally local" coding scheme that instantaneously detects and analyzes the shapes of these short-lasting features. Here, employing analytic psychophysical measurements, we tested whether the prerequisites for temporally local coding exist in the human tactile system. We used pulsatile skin indentations at the fingertip that allowed us to trade manipulation of local pulse shape against changes in global intensity and frequency, achieved by adding pulses of the same shape. We found that manipulating local pulse shape has strong effects on psychophysical performance, supporting the notion that humans implement a temporally local coding scheme for perceptual decisions. Because we found distinct differences in performance across different kinematic layouts of pulses, we asked whether temporally local coding is tuned to a unique kinematic variable. This was not the case: we observed different preferred kinematic variables in different ranges of pulse shapes. Using an established encoding model for primary afferents and indentation stimuli, we were able to demonstrate that the observed kinematic preferences in human performance may well be explained by the response characteristics of Pacinian corpuscles (PCs), a class of human tactile primary afferents.
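A minimal sketch of how local pulse shape can be manipulated independently of global pulse amplitude: halving the duration of a pulse while holding its amplitude fixed changes its peak velocity and acceleration but not its peak position. The raised-cosine pulse form and all parameter values below are illustrative assumptions, not the stimuli actually used in the study.

```python
import numpy as np

FS = 100_000  # sampling rate (Hz), assumed

def raised_cosine_pulse(amplitude_um, duration_ms):
    """One indentation pulse (position trace in µm) of given amplitude and duration."""
    n = int(duration_ms * FS / 1000)
    t = np.arange(n) / FS  # seconds
    return amplitude_um * 0.5 * (1.0 - np.cos(2.0 * np.pi * t / (duration_ms / 1000.0)))

def peak_kinematics(pos):
    """Peak position, velocity, and acceleration magnitudes of a pulse."""
    vel = np.gradient(pos, 1.0 / FS)   # µm/s
    acc = np.gradient(vel, 1.0 / FS)   # µm/s^2
    return pos.max(), np.abs(vel).max(), np.abs(acc).max()

# Halving pulse duration at fixed amplitude leaves peak position unchanged,
# doubles peak velocity, and quadruples peak acceleration -- the local
# "shape" changes while the global intensity (amplitude) does not.
p_slow = raised_cosine_pulse(10.0, 10.0)
p_fast = raised_cosine_pulse(10.0, 5.0)
```

Dissociations like this are what let psychophysics ask which kinematic variable (position, velocity, or acceleration) perceptual decisions actually track.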
The utilization of visual information for the control of ongoing voluntary limb movements has been investigated for more than a century. Recently, online sensorimotor processes for the control of upper-limb reaches were hypothesized to include a distinct process related to the comparison of limb and target positions (i.e., limb-target regulation processes; Elliott et al. in Psychol Bull 136:1023-1044, doi:10.1037/a0020958, 2010). In the current study, this hypothesis was tested by presenting participants with brief windows of vision (20 ms) when the real-time velocity of the reaching limb rose above selected velocity criteria. One experiment tested perceptual judgments of endpoint bias (i.e., under- vs. over-shoot), and another experiment tested shifts in endpoint distributions following an imperceptible target jump. Both experiments revealed that limb-target regulation processes take place at an optimal velocity or "sweet spot" between movement onset and peak limb velocity (i.e., 1.0 m/s with the employed movement amplitude and duration). In contrast with pseudo-continuous models of online control (e.g., Elliott et al. in Hum Mov Sci 10:393-418, doi:10.1016/0167-9457(91)90013-N, 1991), humans likely optimize online limb-target regulation processes by gathering visual information during a rather limited period of time, well in advance of peak limb velocity.
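The velocity-criterion trigger described above can be sketched as follows; this is a hypothetical reimplementation of the gating idea, and the function name, sampling rate, and velocity trace are invented for illustration.

```python
def window_onset_time(velocities, dt, criterion):
    """Time (s) at which limb velocity first rises above `criterion` (m/s),
    i.e., the moment a brief (e.g., 20 ms) vision window would be opened."""
    for i in range(1, len(velocities)):
        if velocities[i - 1] < criterion <= velocities[i]:
            return i * dt
    return None  # criterion never reached on this trial

# 100 Hz velocity samples (m/s) from a hypothetical reach
trace = [0.0, 0.5, 0.9, 1.2, 1.6, 1.2, 0.7]
onset = window_onset_time(trace, dt=0.01, criterion=1.0)
```

Triggering on the rising crossing (rather than any crossing) is what restricts the vision window to the acceleration phase, before peak limb velocity.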
The efficiency of online visuomotor processes was investigated by manipulating vision based on real-time upper-limb velocity. Participants completed rapid reaches under two control conditions (full vision, no vision) and three experimental visual window conditions. The experimental visual windows were early (limb velocity rising from 0.8 to 1.4 m/s), middle (above 1.4 m/s), and late (falling from 1.4 to 0.8 m/s). The results indicated that endpoint consistency comparable to that of full-vision trials was achieved using vision from the early (43 ms) and middle (89 ms) windows, but vision from the middle window entailed a longer deceleration phase (i.e., a temporal cost). The late window was not useful for implementing online trajectory amendments. This study provides further support for the idea of early visuomotor control, which may involve multiple online control processes during voluntary movement.
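The three velocity-defined windows could be expressed as gating logic along the following lines. This is a hypothetical sketch: the function and its rising/falling test are assumptions, not the study's actual control code.

```python
def vision_open(v_now, v_prev, condition):
    """Whether vision should be available given the current and previous
    limb-velocity samples (m/s) under each experimental window condition."""
    rising = v_now >= v_prev
    if condition == "early":   # velocity rising through 0.8-1.4 m/s
        return rising and 0.8 <= v_now <= 1.4
    if condition == "middle":  # velocity above 1.4 m/s
        return v_now > 1.4
    if condition == "late":    # velocity falling from 1.4 back to 0.8 m/s
        return (not rising) and 0.8 <= v_now <= 1.4
    raise ValueError(f"unknown condition: {condition}")
```

Because the early and late windows span the same velocity band, comparing them isolates the effect of *when* in the movement vision arrives, rather than how fast the limb is moving.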
BACKGROUND: Robotic guidance has been shown to facilitate motor skill acquisition, through altered sensorimotor control, in neurologically impaired and healthy populations. OBJECTIVE: To determine whether robot-guided practice and online visual feedback availability primarily influence movement planning or online control mechanisms. METHODS: In this two-experiment study, participants first performed a pre-test involving reaches with or without vision, to obtain baseline measures. In both experiments, participants then underwent an acquisition phase in which they either actively followed robot-guided trajectories or trained unassisted. In the second experiment only, robot-guided or unassisted acquisition was performed either with or without online vision. Following acquisition, all participants completed a post-test identical to the pre-test. Planning and online control mechanisms were assessed through endpoint error and kinematic analyses. RESULTS: The robot-guided and unassisted groups generally exhibited comparable changes in endpoint accuracy and precision. Kinematic analyses revealed that only participants who practiced with the robot exhibited a significantly reduced proportion of movement time spent in the limb deceleration phase (i.e., time after peak velocity). This was true regardless of online visual feedback availability during training. CONCLUSION: The influence of robot-assisted motor skill acquisition is best explained by improved motor planning processes.
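The deceleration-phase measure used in the kinematic analyses can be computed from a velocity trace roughly as follows; this is a minimal sketch, and the synthetic velocity profile is invented for illustration.

```python
import numpy as np

def deceleration_proportion(velocity):
    """Proportion of movement time spent after peak limb velocity
    (time from the velocity peak to movement end over total movement time)."""
    i_peak = int(np.argmax(velocity))
    return (len(velocity) - 1 - i_peak) / (len(velocity) - 1)

# Synthetic velocity profile (m/s): peak occurs early, followed by a
# long deceleration tail, as is typical of goal-directed reaches.
profile = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.5, 1.0, 0.5, 0.0])
prop = deceleration_proportion(profile)
```

A smaller proportion after practice suggests less time devoted to late, feedback-based corrections, which is why the measure is taken to index a shift toward improved movement planning.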