Nasal consonants in syllabic coda position in Japanese assimilate to the place of articulation of a following consonant. The resulting forms may be perceived as different realizations of a single underlying unit, and indeed the kana orthographies represent them with a single character. In the present study, Japanese listeners' response time to detect nasal consonants was measured. Nasals in coda position, i.e., moraic nasals, were detected faster and more accurately than nonmoraic nasals, as reported in previous studies. The place of articulation with which moraic nasals were realized affected neither response time nor accuracy. Non-native subjects who knew no Japanese, given the same materials with the same instructions, simply failed to respond to moraic nasals which were realized bilabially. When the nasals were cross-spliced across place of articulation contexts, the Japanese listeners still showed no significant place of articulation effects, although responses were faster and more accurate to unspliced than to cross-spliced nasals. When asked to detect the phoneme following the (cross-spliced) moraic nasal, Japanese listeners showed effects of mismatch between nasal and context, but non-native listeners did not. Together, these results suggest that Japanese listeners are capable of very rapid abstraction from phonetic realization to a unitary representation of moraic nasals; but they can also use the phonetic realization of a moraic nasal effectively to obtain anticipatory information about following phonemes.
Both English and Japanese have two voiceless sibilant fricatives, an anterior fricative /s/ contrasting with a more posterior fricative /ʃ/. When children acquire sibilant fricatives, English children typically substitute [s] for /ʃ/, whereas Japanese children typically substitute [ʃ] for /s/. This study examined English- and Japanese-speaking adults' perception of children's productions of voiceless sibilant fricatives to investigate whether the apparent asymmetry in the acquisition of voiceless sibilant fricatives reported previously in the two languages was due in part to how adults perceive children's speech. The results of this study show that adult speakers of English and Japanese weighed acoustic parameters differently when identifying fricatives produced by children and that these differences explain, in part, the apparent cross-language asymmetry in fricative acquisition. This study shows that generalizations about universal and language-specific patterns in speech-sound development cannot be made without considering all sources of variation, including speech perception.
In four experiments, we investigated how listeners compensate for reduced /t/ in Dutch. Mitterer and Ernestus [Mitterer, H., & Ernestus, M. (2006). Listeners recover /t/s that speakers lenite: evidence from /t/-lenition in Dutch. Journal of Phonetics, 34] showed that listeners are biased to perceive a /t/ more easily after /s/ than after /n/, compensating for the tendency of speakers to reduce word-final /t/ after /s/ in spontaneous conversations. We tested the robustness of this phonological context effect in perception with three very different experimental tasks: an identification task, a discrimination task with native listeners and with non-native listeners who have no experience with /t/-reduction, and a passive listening task (using electrophysiological dependent measures). The context effect was generally robust against these experimental manipulations, although we also observed some deviations from the overall pattern. Our combined results show that the context effect in compensation for reduced /t/ results from a complex process involving auditory constraints, phonological learning, and lexical constraints.
The English neighborhood literature has demonstrated that neighborhood density affects auditory word recognition. However, an unresolved question is exactly how neighborhood density should be calculated. In this paper the definition of a lexical neighborhood is explored in Japanese. Data for the analyses were collected from Japanese neighborhood experiments using the same 700 test words and a lexicon that consisted of only nouns from the NTT psycholinguistic database [Amano and Kondo, 1999]. Three different neighborhood calculations were used to analyze the data. The first calculation was based on Greenberg–Jenkins phoneme substitution, deletion, and insertion rules. The second calculation included prosodic information as another dimension in the neighborhood calculation, in order to reflect the finding that prosodic information has a vital role in Japanese word recognition. The third calculation was based on the auditory properties of the words in the lexicon; neighborhood density was measured by comparing the similarity of cochleagrams of the 66,000 audio files. The results of the analyses demonstrated that phonological similarity within the lexicon seems to be calculated on the basis of higher-level abstract representations rather than a lower-level acoustic-auditory representation in any of the experiments. The implications of the results for current word recognition theories will also be discussed. [Work supported by NIH.]
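The first calculation above follows the standard one-phoneme rule: two words are neighbors if a single phoneme substitution, deletion, or insertion turns one into the other. A minimal sketch of that rule is given below; the toy lexicon and its phoneme transcriptions are hypothetical, not drawn from the NTT database.

```python
# Sketch of the Greenberg–Jenkins neighborhood rule: words are neighbors
# iff their phoneme-level edit distance is exactly 1 (one substitution,
# one deletion, or one insertion).

def is_neighbor(w1, w2):
    """True if w1 and w2 differ by exactly one phoneme
    substitution, deletion, or insertion."""
    if len(w1) == len(w2):
        # Substitution: exactly one position differs.
        return sum(a != b for a, b in zip(w1, w2)) == 1
    if len(w1) > len(w2):          # make w1 the shorter word
        w1, w2 = w2, w1
    if len(w2) - len(w1) != 1:     # lengths must differ by exactly 1
        return False
    # Deletion/insertion: skip the first mismatch in the longer word
    # and check that the remainders line up.
    i = 0
    while i < len(w1) and w1[i] == w2[i]:
        i += 1
    return w1[i:] == w2[i + 1:]

def neighborhood_density(word, lexicon):
    """Count lexicon entries within one phoneme edit of `word`."""
    return sum(is_neighbor(word, other) for other in lexicon if other != word)

# Toy lexicon: words as tuples of phoneme symbols (illustrative only).
lexicon = [
    ("k", "a", "t"),
    ("k", "a", "p"),        # substitution neighbor of /kat/
    ("a", "t"),             # deletion neighbor of /kat/
    ("k", "a", "t", "o"),   # insertion neighbor of /kat/
    ("s", "o", "r", "a"),   # not a neighbor
]
print(neighborhood_density(("k", "a", "t"), lexicon))  # → 3
```

Representing words as tuples of phoneme symbols (rather than raw strings) keeps multi-character phoneme labels intact; the second, prosody-aware calculation could be sketched by adding pitch-accent category as an extra element that must also match.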