Studies on bilingual word reading and translation have examined the effects of lexical variables (e.g., concreteness, cognate status) by comparing groups of non-translators with varying levels of L2 proficiency. However, little attention has been paid to another relevant factor: translation expertise (TE). To explore this issue, we administered word reading and translation tasks to two groups of non-translators possessing different levels of informal TE (Experiment 1), and to three groups of bilinguals possessing different levels of translation training (Experiment 2). Reaction-time recordings showed that, in all groups, reading was faster than translation and unaffected by concreteness or cognate status. Conversely, in both experiments, all groups translated concrete and cognate words faster than abstract and non-cognate words, respectively. Notably, an advantage of backward over forward translation was observed only for low-proficiency non-translators (in Experiment 1). Also, in Experiment 2, the modifications induced by translation expertise were more marked in the early than in the late stages of training and practice. These results suggest that TE contributes to modulating inter-equivalent connections in bilingual memory.
While influential works since the 1970s have widely assumed that imitation is an innate skill in both human and non-human primate neonates, recent empirical studies and meta-analyses have challenged this view, indicating other forms of reward-based learning as relevant factors in the development of social behavior. The translation of visual input into matching motor output that underlies imitation abilities instead seems to develop along with social interactions and sensorimotor experience during infancy and childhood. Recently, a new visual stream has been identified in both human and non-human primate brains, updating the dual visual stream model. This third pathway is thought to be specialized for dynamic aspects of social perception, such as eye-gaze and facial expression, and crucially for audio-visual integration of speech. Here, we review empirical studies addressing an understudied but crucial aspect of speech and communication, namely the processing of visual orofacial cues (i.e., the perception of a speaker’s lip and tongue movements) and their integration with vocal auditory cues. Throughout this review, we offer new insights from our understanding of speech as the product of the evolution and development of a rhythmic and multimodal organization of sensorimotor brain networks, supporting volitional motor control of the upper vocal tract and audio-visual voice-face integration.