Twelve normal right-handed female listeners heard rhyming pairs of (stop + /a/) nonsense syllables dichotically and monotically. Word onsets were either simultaneous or shifted by 15, 30, 60, and 90 msec. Each subject heard 600 pairs. Monotically, lag-syllable discrimination was poor at all delays, with differences leveling off at 30 msec (93% lead vs 19% lag). Dichotically, lag-ear discrimination was roughly 22% better at all lag times when the right ear lagged. [First found by Shankweiler and Studdert-Kennedy, personal communication.] Left-ear lag scores improved and overcame the right-ear advantage only after a 30-msec delay. Total right-ear scores for the entire dichotic portion of the experiment (M = 77%) exceeded total left-ear scores (M = 66%), thus maintaining an over-all right-ear laterality effect. Thus, previous experimenters who allowed as much as 90-msec delay to be randomly distributed among their so-called “simultaneous pairs” might still have expected to find a right-ear laterality effect. [Supported in part by NINDS.]
Competing rhyming pairs of both natural and synthetic speech were presented both monotically and dichotically to normal right-handed listeners. When the onsets of the words were simultaneous (±2.5 msec), the right-ear scores were generally higher than the left-ear scores. However, voiceless consonants were much more intelligible dichotically than voiced consonants, regardless of which ear received the voiceless consonant. When both consonants competed monotically, the difference between voiced and voiceless consonant perception was either reversed or markedly attenuated. An explanation based on lagging of the aperiodic-to-periodic transition of the voiceless CV is offered.
We had previously reported [J. Acoust. Soc. Amer. 45, 299 (A) (1969)] that in dichotic listening to consonant-vowel (CV) utterances in natural speech, more voiceless consonants were correctly perceived than voiced consonants. In that experiment, the voiceless CVs had a slightly higher fundamental frequency than the voiced; therefore, synthetic CVs with uniform fundamental frequency and duration were used in the present experiment. Twenty normal right-handed females listened to simultaneous (±2.5 msec) (stop + /a/) synthetic nonsense syllables both monotically and dichotically. In addition to the expected right-ear laterality effect in dichotic listening, we confirmed our previous finding: dichotically, voiceless consonants predominated (73% vs 48%). Monotically, voiced consonants were most often heard correctly (60% vs 47%). An explanation related to onset of change from aperiodic to periodic portions of voiceless vs voiced utterances is presented. [We are grateful to Arthur Abramson and Lee Lisker for the use of their synthetic speech material. Supported in part by NINDS.]
Patients with circumscribed temporal lobe lesions show large contralateral ear decrements in simultaneous dichotic speech tasks. However, when noise is presented to the ipsilateral ear, there is little decrement in the contralateral ear scores. As the intensity of speech in the ipsilateral ear is gradually increased above threshold, scores of the contralateral ear decrease markedly; even with a 30-dB intensity difference in its favor, the contralateral ear still shows a decrement from its monaural level. The nervous system seems to signal its recognition of “speech” in the ipsilateral ear by suppressing the performance of the contralateral ear.
Rhyming monosyllabic pairs that differ only in the initial plosive consonant were recorded on separate channels so as to be “simultaneous in onset” within 5 msec. Onsets of the words were also staggered by 10, 20, 30, 40, 50, and 100 msec (±2.5 msec). Unvoiced consonants are perceived more readily in the simultaneous listening task. When the frequency of occurrence of voiced versus voiceless consonants is controlled, central laterality effects are overshadowed by the overriding intelligibility of voiceless consonants. The significance of these findings with respect to temporal coding of speech will be discussed. [Work supported by the National Institute for Neurological Diseases and Blindness.]