Dyslexia in alphabetic languages has been extensively investigated, and this work suggests a central deficit in orthography-to-phonology mapping in the left hemisphere. In contrast, the central deficit underlying Chinese dyslexia remains unclear. Because of the logographic nature of Chinese characters, some have suggested that Chinese dyslexia should involve larger deficits in the semantic system. To investigate this, Chinese children with reading disability (RD) were compared to typically developing (TD) children using functional magnetic resonance imaging (fMRI) on a rhyming judgment task and on a semantic association judgment task. RD children showed less activation for both tasks in right visual cortex (BA 18, 19) and left occipito-temporal cortex (BA 37), suggesting a deficit in visuo-orthographic processing. RD children also showed less activation for both tasks in left inferior frontal gyrus (BA 44), which additionally showed significant correlations with activation of bilateral visuo-orthographic regions in the RD group, suggesting that the abnormalities in frontal cortex and in posterior visuo-orthographic regions may reflect a deficit in the connectivity between brain regions. Analyses failed to reveal larger group differences for the semantic than for the rhyming task, suggesting that in Chinese dyslexia access to phonology and access to semantics from visual orthography are similarly impaired.
It is well known that natural languages share certain aspects of their design. For example, across languages, syllables like blif are preferred to lbif. But whether language universals are myths or mentally active constraints—linguistic or otherwise—remains controversial. To address this question, we used fMRI to investigate brain response to four syllable types, ordered by their linguistic well-formedness (e.g., blif≻bnif≻bdif≻lbif, where ≻ indicates preference). Results showed that syllable structure monotonically modulated hemodynamic response in Broca's area, and its pattern mirrored participants' behavioral preferences. In contrast, ill-formed syllables did not systematically tax sensorimotor regions—while such syllables engaged primary auditory cortex, they tended to deactivate (rather than engage) articulatory motor regions. The convergence between the cross-linguistic preferences and English participants' hemodynamic and behavioral responses is remarkable given that most of these syllables are unattested in their language. We conclude that human brains encode broad restrictions on syllable structure.
All spoken languages express words by sound patterns, and certain patterns (e.g., blog) are systematically preferred to others (e.g., lbog). What principles account for such preferences: does the language system encode abstract rules banning syllables like lbog, or does the dislike of such syllables reflect the increased motor demands associated with speech production? More generally, we ask whether linguistic knowledge is fully embodied or whether some linguistic principles could potentially be abstract. To address this question, here we gauge the sensitivity of English speakers to the putative universal syllable hierarchy (e.g., blif≻bnif≻bdif≻lbif) while undergoing transcranial magnetic stimulation (TMS) over the cortical motor representation of the left orbicularis oris muscle. If syllable preferences reflect motor simulation, then worse-formed syllables (e.g., lbif) should (i) elicit more errors; (ii) engage motor brain areas more strongly; and (iii) elicit stronger effects of TMS on these motor regions. In line with the motor account, we found that repetitive TMS pulses impaired participants' global sensitivity to the number of syllables, and functional MRI confirmed that the cortical stimulation site was sensitive to the syllable hierarchy. Contrary to the motor account, however, ill-formed syllables were least likely to engage the lip sensorimotor area and were least impaired by TMS. Results suggest that speech perception automatically triggers motor action, but this effect is not causally linked to the computation of linguistic structure. We conclude that the language and motor systems are intimately linked, yet distinct. Language is designed to optimize motor action, but its knowledge includes principles that are disembodied and potentially abstract.

Many animal species communicate using vocal patterns, and humans are no exception. Every hearing human community preferentially expresses words by oral patterns (1).
Speech sounds, such as d, o, g, give rise to contrasting patterns (e.g., dog vs. god), and certain speech patterns are systematically preferred to others. Syllables like blog, for instance, are more frequent across languages than lbog (2). Behavioral experiments further demonstrate similar preferences among individual speakers despite no experience with either syllable type (3–6). Although such facts demonstrate that the sound patterns of language are systematically constrained, the nature of such constraints remains unknown. One explanation invokes universal linguistic constraints on the sound structure of language (7). An alternative account, however, holds that these patterns reflect motor, rather than linguistic, constraints arising from their embodiment in the motor system of speech (8–11). Indeed, the speech patterns that are attested across spoken languages are not arbitrary, and frequent patterns tend to optimize speech production (12). Such observations open up the possibility that the so-called "language universals" are action based. In this view, the encoding of a speech stimulus engages ...
Across languages, certain onset clusters are systematically preferred (e.g., blif≻bnif≻bdif≻lbif, where "≻" indicates preference), and speakers extend these preferences even to onsets that are unattested in their language. All such demonstrations, however, come from cluster-rich languages, so the observed preferences could reflect not universal linguistic restrictions but lexical analogy. To address this possibility, here we turn to Mandarin Chinese, a cluster-poor language. We reasoned that, if people are sensitive to the onset hierarchy, then they should repair ill-formed onsets as better-formed ones: the worse formed the onset, the more likely its repair, hence its misidentification. Results were consistent with this hypothesis, and they obtained irrespective of participants' experience with their second language (English). Nonetheless, the effect of syllable structure was strongly modulated by phonetic cues and task demands. These findings suggest that speakers might share broad phonological restrictions, but phonetic factors play a major role in their detection.