A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding has been thought to reflect a fundamental difference between auditory and visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent moving image of a bouncing ball with a distinct collision point on the floor) drove synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggested that visual timing may access higher-order beat perception mechanisms. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than previously reported.
Is there a universal hierarchy of the senses, such that some senses (e.g., vision) are more accessible to consciousness and linguistic description than others (e.g., smell)? The long-standing presumption in Western thought has been that vision and audition are more objective than the other senses, serving as the basis of knowledge and understanding, whereas touch, taste, and smell are crude and of little value. This predicts that humans ought to be better at communicating about sight and hearing than about the other senses, and decades of work based on English and related languages certainly suggest this is true. However, how well does this reflect the diversity of languages and communities worldwide? To test whether there is a universal hierarchy of the senses, stimuli from the five basic senses were used to elicit descriptions in 20 diverse languages, including 3 unrelated sign languages. We found that languages differ fundamentally in which sensory domains they systematically encode, and in how they do so. The tendency toward better coding in some domains can be explained in part by cultural preoccupations. Although languages seem free to elaborate specific sensory domains, some general tendencies emerge: for example, with some exceptions, smell is poorly coded. The surprise is that, despite the gradual phylogenetic accumulation of the senses, and the imbalances in the neural tissue dedicated to them, no single hierarchy of the senses imposes itself upon language.
Spoken language (unimodal) interpreters often prefer to interpret from their non-dominant language (L2) into their native language (L1). Anecdotally, signed language (bimodal) interpreters express the opposite bias, preferring to interpret from L1 (spoken language) into L2 (signed language). We conducted a large survey study (N = 1,359) of both unimodal and bimodal interpreters that confirmed these preferences. The L1-to-L2 direction preference was stronger for novice than for expert bimodal interpreters, whereas novice and expert unimodal interpreters did not differ from each other. The results indicated that the different direction preferences of bimodal and unimodal interpreters cannot be explained by language production–comprehension asymmetries or by work or training experiences. We suggest that modality- and language-specific features of signed languages drive the directionality preferences of bimodal interpreters. Specifically, we propose that fingerspelling, transcoding (literal word-for-word translation), self-monitoring, and consumers’ linguistic variation influence the preference of bimodal interpreters for working into their L2.
Among spoken language interpreters, a long-standing question regarding directionality is whether interpretations are better when working into one’s native language (L1) or into one’s ‘active’ non-native language (L2). In contrast to studies supporting work into L1, signed language interpreters report a preference for working into L2. Accordingly, we investigated whether signed language interpreters actually perform better when interpreting into their L2 (American Sign Language) or into their L1 (English). Interpretations by 30 interpreters (15 novice, 15 expert), delivered under experimental conditions, were assessed for accuracy (semantic content) and articulation quality (flow, speed, and prosody). On both measures, novices scored significantly better when interpreting into English (L1); experts were equally accurate, and showed similar articulation quality, in both directions. The results for the novice interpreters support the hypothesis that the difficulty of L2 production drives interpreting performance with respect to directionality. The findings also indicate a disconnect between direction preference and interpreting performance. Novices’ perception of their ASL production ability may be distorted because they can default to fingerspelling and transcoding, and weak self-monitoring of signing may also lead them to overrate their ASL skills. Interpreter educators should therefore address the misperceptions of signing proficiency that arise from these available, but inappropriate, strategies.