Language rhythm determines young infants' language discrimination abilities. However, it is unclear whether young bilingual infants exposed to rhythmically similar languages develop sensitivities to cross‐linguistic rhythm cues to discriminate their dual language input. To address this question, we tested 3.5‐month‐old monolingual Basque, monolingual Spanish, and bilingual Basque‐Spanish infants' language discrimination abilities (across low‐pass filtered speech samples of Basque and Spanish) using the visual habituation procedure. Although they fall within the same rhythmic class, Basque and Spanish exhibit significant differences in their distributions of vocalic intervals (within‐rhythmic‐class variation). All infant groups in our study successfully discriminated between the languages, although each group exhibited a different pattern. Monolingual Spanish infants succeeded only when they heard Basque during habituation, suggesting that they were influenced by native language recognition. The bilingual and the Basque monolingual infants showed no such asymmetry and succeeded irrespective of the language of habituation. Additionally, bilingual infants exhibited longer looking times in the test phase than monolinguals, indicating that bilingual infants attend to their native languages differently than monolinguals do. Overall, the results suggest that bilingual infants are sensitive to within‐rhythm acoustic regularities of their native language(s), facilitating language discrimination and hence supporting early bilingual acquisition.
Research on cross-language vowel perception in both infants and adults has shown that for many vowel contrasts, discrimination is easier when the same pair of vowels is presented in one direction than in the reverse direction. According to one account, these directional asymmetries reflect a universal bias favoring "focal" vowels (i.e., vowels whose adjacent formants are close in frequency, which concentrates acoustic energy into a narrower spectral region). An alternative, but not mutually exclusive, account is that such effects reflect an experience-dependent bias favoring prototypical instances of native-language vowel categories. To disentangle the effects of focalization and prototypicality, we first identified a region of phonetic space in which vowels were consistently categorized as /u/ by both Canadian-English and Canadian-French listeners but nevertheless varied in stimulus goodness (i.e., the best Canadian-French /u/ exemplars were more focal than the best Canadian-English /u/ exemplars). In subsequent AX discrimination tests, both Canadian-English and Canadian-French listeners were better at discriminating changes from less to more focal /u/'s than the reverse, regardless of variation in prototypicality. These findings demonstrate a universal bias favoring vowels with greater formant convergence that operates independently of biases related to language-specific prototype categorization.
Duration‐based auditory grouping preferences are presumably shaped by language experience in adults and infants, unlike intensity‐based grouping, which is governed by a universal bias toward a loud‐soft preference. It has been proposed that duration‐based rhythmic grouping preferences develop as a function of native language phrasal prosody. Additionally, it has been suggested that phrasal prosody supports syntax acquisition (e.g., prosodic bootstrapping of word order within phrases). In the current study, using a looking preference procedure, we assessed 9‐to‐10‐month‐old Spanish‐dominant and Basque‐dominant bilingual infants' rhythmic preferences in response to nonlinguistic tones alternating in duration or intensity. In the intensity‐based condition, no effects of language experience were present. In the duration‐based condition, however, infants exhibited grouping patterns as predicted by the phrasal prosody of their dominant input. Considering the proposed link between syntactic bootstrapping and perceptual tone grouping, our overall results suggest that syntax acquisition (e.g., learning the rules of word order) is supported by different auditory perceptual mechanisms for the dominant syntax than for the less dominant syntax in the infant's dual language input.
The present study investigated the proactive nature of the human brain in language perception. Specifically, we examined whether early proficient bilinguals can use interlocutor identity as a cue for language prediction, using an event-related potentials (ERP) paradigm. Participants were first familiarized, through video segments, with six novel interlocutors who were either monolingual or bilingual. Then, the participants completed an audio-visual lexical decision task in which all the interlocutors uttered words and pseudo-words. Critically, the speech onset started about 350 ms after the beginning of the video. ERP waves between the onset of the visual presentation of the interlocutors and the onset of their speech significantly differed for trials where the language was not predictable (bilingual interlocutors) and trials where the language was predictable (monolingual interlocutors), revealing that visual interlocutor identity can in fact function as a cue for language prediction, even before the onset of the auditory-linguistic signal.
Recently, it has been proposed that sensitivity to non-arbitrary relationships between speech sounds and objects potentially bootstraps lexical acquisition. However, it is currently unclear whether preverbal infants (e.g., before 6 months of age) with different linguistic profiles are sensitive to such non-arbitrary relationships. Here, we assessed 4- and 12-month-old Basque monolingual and Spanish-Basque bilingual infants' sensitivity to cross-modal correspondences between sound-symbolic non-words without syllable repetition ('buba', 'kike') and drawings of rounded and angular shapes. Our findings demonstrate that sensitivity to sound-shape correspondences emerges by 12 months of age in both monolinguals and bilinguals. This finding suggests that spontaneous sound-shape matching is likely to be the product of language learning and development and may not be readily available prior to the onset of word learning.