Stimulation of one sensory modality can induce perceptual experiences in another modality that reflect synaesthetic correspondences among different dimensions of sensory experience. In visual-hearing synaesthesia, for example, higher pitched sounds induce visual images that are brighter, smaller, higher in space, and sharper than those induced by lower pitched sounds. Claims that neonatal perception is synaesthetic imply that such correspondences are an unlearned aspect of perception. To date, the youngest children in whom such correspondences have been confirmed with any certainty were 2- to 3-year-olds. We examined preferential looking to assess 3- to 4-month-old preverbal infants' sensitivity to the correspondences linking auditory pitch to visuospatial height and visual sharpness. The infants looked longer at a changing visual display when this was accompanied by a sound whose changing pitch was congruent, rather than incongruent, with these correspondences. This is the strongest indication to date that synaesthetic cross-modality correspondences are an unlearned aspect of perception.
Over half the world's population speaks a tone language, yet infant speech perception research has typically focused on consonants and vowels. Very young infants can discriminate a wide range of native and nonnative consonants and vowels; then, in a process of perceptual reorganization over the first year, discrimination of most nonnative speech sounds deteriorates. We investigated perceptual reorganization for tones by testing 6- and 9-month-old infants from tone (Chinese) and nontone (English) language environments on speech (lexical tone) and nonspeech (violin sound) tone discrimination in both cross-sectional and longitudinal studies. Overall, Chinese infants performed equally well at 6 and 9 months for both speech and nonspeech tone discrimination. Conversely, English infants' discrimination of lexical tone declined between 6 and 9 months of age, whereas their nonspeech tone discrimination remained constant. These results indicate that the reorganization of tone perception is a function of the native language environment, and that this reorganization is linguistically based.
Certain correspondences between the sound and meaning of words can be observed in subsets of the vocabulary. These sound-symbolic relationships have been suggested to result in easier language acquisition, but previous studies have tested effects of sound symbolism only on learning category distinctions, not on word learning. In two word learning experiments, we varied the extent to which phonological properties related to a rounded-angular shape distinction, and we distinguished learning of categories from learning of individual words. We found that sound symbolism resulted in an advantage for learning categories of sound-shape mappings but did not assist in learning individual word meanings. These results are consistent with the limited presence of sound symbolism in natural language. The results also provide a reinterpretation of the role of sound symbolism in language learning and language origins, and a greater specification of the conditions under which sound symbolism proves advantageous for learning.
English, French, and bilingual English‐French 17‐month‐old infants were compared for their performance on a word learning task using the Switch task. Object names presented a /b/ vs. /g/ contrast that is phonemic in both English and French, and auditory strings comprised English and French pronunciations by an adult bilingual. Infants were habituated to two novel objects labeled 'bowce' or 'gowce' and were then presented with a switch trial, where a familiar word and familiar object were paired in a novel combination, and a same trial, with a familiar word–object pairing. Bilingual infants looked significantly longer at switch than at same trials, but English and French monolinguals did not, suggesting that bilingual infants can learn word–object associations when the phonetic conditions favor their input. Monolingual infants likely failed because the bilingual mode of presentation increased phonetic variability and did not match their real‐world input. Experiment 2 tested this hypothesis by presenting monolingual infants with nonce word tokens restricted to native language pronunciations. Monolinguals succeeded in this case. Experiment 3 revealed that the absence of unfamiliar pronunciations in Experiment 2, rather than the reduction in overall phonetic variability, was the key factor in this success, as French infants failed when tested with English pronunciations of the nonce words. Thus phonetic variability affects how infants perform in the Switch task in ways that contribute to differences in monolingual and bilingual performance. Moreover, both monolinguals and bilinguals are developing adaptive speech processing skills that are specific to the language(s) they are learning.
Mutual Exclusivity (ME) is a prominent constraint in language acquisition, which guides children to establish one-to-one mappings between words and referents. But how does unfolding experience of multiple-to-one word-meaning mappings in bilingual children's environment affect their understanding of when to use ME and when to accept lexical overlap? Three-to-five-year-old monolingual and simultaneous bilingual children completed two pragmatically distinct tasks, where successful word learning relied on either the default use of ME or the ability to accept overlapping labels. All children could flexibly use ME by following the social-pragmatic directions available in each task. However, linguistic experience shaped the development of ME use, whereby older monolinguals showed a greater reliance on the one-to-one mapping assumption, but older bilinguals showed a greater ability to accept lexical overlap. We suggest that flexible use of ME is thus shaped by pragmatic information present in each communicative interaction and children's individual linguistic experience.
This study compared tone sensitivity in monolingual and bilingual infants in a novel word learning task. Tone language learning infants (Experiment 1, Mandarin monolingual; Experiment 2, Mandarin-English bilingual) were tested with Mandarin (native) or Thai (non-native) lexical tone pairs which contrasted static vs. dynamic (high vs. rising) tones or dynamic vs. dynamic (rising vs. falling) tones. Non-tone language, English-learning infants (Experiment 3) were tested on English intonational contrasts or the Mandarin or Thai tone contrasts. Monolingual Mandarin-learning infants were able to bind tones to novel words for the Mandarin High-Rising contrast, but not for the Mandarin Rising-Falling contrast, and they were insensitive to both the High-Rising and the Rising-Falling tone contrasts in Thai. Bilingual English-Mandarin infants were similar to the Mandarin monolinguals in that they were sensitive to the Mandarin High-Rising contrast and not to the Mandarin Rising-Falling contrast. However, unlike the Mandarin monolinguals, they were also sensitive to the High-Rising contrast in Thai. Monolingual English-learning infants were insensitive to all three types of contrasts (Mandarin, Thai, English), although they did respond differentially to tone-bearing vs. intonation-marked words. Findings suggest that infants' sensitivity to tones in word learning contexts depends heavily on tone properties, and that this influence is, in some cases, stronger than effects of language familiarity. Moreover, bilingual infants demonstrated greater phonological flexibility in tone interpretation.
Learning to map words onto their referents is difficult, because there are multiple possibilities for forming these mappings. Cross-situational learning studies have shown that word–object mappings can be learned across multiple situations, as can verbs when presented in a syntactic context. However, these previous studies presented either nouns or verbs in ambiguous contexts and thus bypassed much of the complexity of multiple grammatical categories in speech. We show that noun learning in adults is robust when objects are moving, and that verbs can also be learned from similar scenes without additional syntactic information. Furthermore, we show that both nouns and verbs can be acquired simultaneously, thus resolving category-level as well as individual word-level ambiguity. However, nouns were learned more quickly than verbs, and we discuss this in light of previous studies investigating the noun advantage in word learning.