It is well known that speech perception is deeply affected by the phoneme categories of the native language. Recent studies have found that phonotactics, i.e., constraints on the co-occurrence of phonemes within words, also have a considerable impact on speech perception routines. For example, Japanese does not allow (non-nasal) coda consonants. When presented with stimuli that violate this constraint, such as /ebzo/, Japanese adults report hearing a /u/ between the consonants, i.e., /ebuzo/. We examined this phenomenon using event-related potentials (ERPs) with French and Japanese participants in order to study how and when the phonotactic properties of the native language affect speech perception routines. Trials consisting of four similar precursor stimuli were presented, followed by a test stimulus that was either identical or different depending on the presence or absence of an epenthetic vowel /u/ between two consonants (e.g., "ebuzo, ebuzo, ebuzo, ebzo"). Behavioral results confirm that Japanese participants, unlike French participants, are not able to discriminate between identical and deviant trials. In the ERPs, three mismatch responses were recorded in French participants. These responses were either absent or significantly weaker in Japanese participants. In particular, a component similar in latency and topography to the mismatch negativity (MMN) was recorded for French, but not for Japanese, participants. Our results suggest that the impact of phonotactics takes place early in speech processing and support models of speech perception that postulate that the input signal is directly parsed into the native-language phonological format. We speculate that such a fast computation of a phonological representation should facilitate lexical access, especially under degraded conditions.
The location of phonological phrase boundaries was shown to affect lexical access in English-learning infants of 10 and 13 months of age. Experiments 1 and 2 used the head-turn preference procedure: infants were familiarized with two bisyllabic words, then presented with sentences that either contained the familiarized words or contained both of their syllables separated by a phonological phrase boundary. Ten-month-olds did not show any listening preference, whereas 13-month-olds listened significantly longer to sentences containing the familiarized words. Experiments 3 and 4 relied on a variant of the conditioned head-turning technique. In a first session, infants were trained to turn their heads for an isolated bisyllabic word. In the second session, they were exposed to the same sentences as above. Both 10- and 12.5-month-old infants turned significantly more often when the target word truly appeared in the sentence. These results suggest that phonological phrase boundaries constrain on-line lexical access in infants.