Oxford University Press, New York, 1990. x + 307 pp. Price: $60.00. ISBN 0-19-505475-X.

When Pythagoras and his students first tried to define which sounds fit together pleasantly, they began a millennia-long series of European and American investigations into the nature of musical-pitch perception. Many of us who think seriously about musical pitch tend to forget that our views are focused through the lens of eighteenth- and nineteenth-century Western music. In that music, listeners' perceptions are closely tied to the key in which the composer chose to write. Carol Krumhansl's Cognitive Foundations of Musical Pitch adduces data on the use and perception of musical pitch from the key-bound works of composers like Bach, Chopin, and Mozart, but also samples some of the early key-uncertain experiments of composers like Schoenberg and Stravinsky. Then, in a step toward unveiling culture-free aspects of musical pitch, she includes a section on non-Western music, some of which uses microtonal tunings.

Psychoacoustic studies of pitch generally examine the sensation that commonly changes with changes in physical frequency. Krumhansl's studies of pitch generally examine the degree of perceived relation between a tone or chord and an adjacent tone, melody, or chord. Her application of the term pitch is thus closer to the concepts of relative pitch and musical interval than to auditory frequency analysis. However, the fact that the word carries a meaning different from the one most readers of this Journal habitually use does not detract from the value of the book. It does mean that readers who have not looked before at music research will find the first few minutes of reading complicated by the unfamiliar use of a familiar term.
In analyzing interaural temporal relations, the binaural system may receive information from one or more of three separate stimulus aspects: (1) difference in time of the start of stimulation, (2) difference in time between similar portions of the continuing wave form at the two ears, and (3) difference in time of the end of stimulation. In this study, the first and third kinds of difference were combined for convenience as “transient disparity”; the second was called “ongoing disparity.” The relative effectiveness of these two temporal relations in producing changes in auditory localization was investigated by finding, for various values of transient disparity, a value of ongoing disparity that brought the sound back to center. For a given value of transient disparity, the necessary ongoing disparity value varies as a function of stimulus duration. Transient disparity loses its effectiveness for stimulus durations greater than about 150 msec. For a duration of 100 msec, it takes roughly 35 times as much transient disparity as ongoing disparity to bring the sound to center; for a duration of 30 msec, it takes about 7 times as much; and for a duration of 10 msec, 4 to 5 times as much. From the working hypothesis that the relative values of transient and ongoing disparities are directly proportional to the durations over which each cue is operative, an “effective onset duration” appears to lie between 2 and 4 msec.
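The closing inference can be checked arithmetically: if, as the stated working hypothesis holds, the transient and ongoing cues trade in proportion to the durations over which each operates, the effective onset duration is simply the burst duration divided by the reported trading ratio. A minimal sketch (the durations and ratios are those quoted in the abstract; the proportionality model is the hypothesis under test, not an established fact):

```python
# Working hypothesis: (required transient disparity) / (required ongoing disparity)
# = (duration over which the ongoing cue operates) / (effective onset duration).
# Solving for the effective onset duration: onset = burst_duration / ratio.

data = [
    (100.0, 35.0),  # burst duration (msec), transient/ongoing trading ratio
    (30.0, 7.0),
    (10.0, 4.5),    # abstract gives "4 to 5 times"; midpoint used here
]

onsets = [duration / ratio for duration, ratio in data]
for (duration, _), onset in zip(data, onsets):
    print(f"{duration:6.1f} msec burst -> effective onset ~ {onset:.1f} msec")

# All three estimates fall roughly in the 2-4 msec range quoted in the abstract.
assert all(2.0 <= t <= 4.5 for t in onsets)
```

The three burst durations give estimates of about 2.9, 4.3, and 2.2 msec, consistent with the abstract's stated range of roughly 2 to 4 msec.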
Measurement on experienced listeners of interaural time difference (ITD) thresholds for wide-band random noise indicates that the threshold varies systematically with duration of stimulation. In order to determine the point at which increased duration no longer decreases the ITD threshold, stimulus (noise burst) duration was varied between 0.01 and 1.94 sec. A given ITD was maintained throughout any particular burst, starting time included. All stimuli were presented at a level of 65 dB SPL to each phone. The "duration" versus "ITD threshold" function reaches asymptote at approximately 0.7 sec, indicating that the binaural system which effects the comparison necessary for a lateralization judgment may integrate information over that period for the kind of stimulus used.
is determined by the time of day you choose for your experiments. I often demonstrate the binaural vs monaural effects by having a person in a crowded restaurant close off one ear with a finger. He then loses the ability to concentrate on one chosen talker when three or four persons are talking simultaneously.

When the organized sounds are music rather than speech, true binaural listening gives the listener the ability to concentrate on (i.e., selectively amplify) one choir of instruments at the expense of all the others. The comparison with cross-correlation performance in 2-channel receiving systems is striking.

All in all, the compromise system has enough good points in it to merit serious consideration. It is not an antisocial device, because you can carry on a conversation with your guests at least as well as with two loudspeakers going. You need not be "wired for sound" if you use any of the well-known wireless methods such as an induction loop, a radio transmitter, or the ultrasonic transducer mentioned in the foregoing. The subjective results are almost as good as true binaural listening, and orders of magnitude better than a 2-channel stereophonic system (which is capable of lateral and depth localization and absolutely nothing more).

In the past few decades, several attempts have been made to record the relative frequency of occurrence of the phonemic elements of American speech. Almost every attempt has left some question of validity. One¹ was derived primarily from written material; another² from radio announcements; a third³ from the speech of 30-month-old children.

The study of telephone conversations by French, Carter, and Koenig,⁴ however, seems to have avoided many of the pitfalls of validity. The language was that of adult Americans, speaking extemporaneously, and was recorded verbatim with a few logical limitations (stereotyped telephone greetings, nonfluencies, and profanity were omitted, for instance).
Yet, a number of times when investigators have considered using the French, Carter, and Koenig data, some uncertainty has arisen because of certain details of the method they used. The original recordings were stenographic; the phonetic transcription was done by the authors (engineers) who attempted to transcribe words according to what "we regarded as being typical pronunciation heard in reasonably enunciated conversation among educated persons in New York." In addition, two of the authors were from the eastern part of the country. Considering these problems of transcription, one might well assume that the table of relative frequency of occurrence of phonemic elements published in that 1930 paper had certain phonemes weighted out of proportion as a result of the peculiarities of the regional speech represented.

Several years ago, it became necessary for the author to retranscribe the word lists used by French, Carter, and Koenig according to the General American pronunciations suggested by Kenyon and Knott.⁵ A copy of the original published list, corrected by N. R. French, was used. In thos...
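The tabulation step underlying such studies is straightforward to sketch: given each word's frequency of occurrence and its phonemic transcription, the relative frequency of a phoneme is its frequency-weighted count divided by the total phoneme count. A minimal sketch in Python (the three-word list and its ARPAbet-style transcriptions are invented for illustration; they are not taken from the French, Carter, and Koenig data or from Kenyon and Knott):

```python
from collections import Counter

# Hypothetical word list: word -> (frequency of occurrence, phonemic transcription).
# Transcriptions use illustrative ARPAbet-style symbols, not the General
# American transcriptions actually used in the retranscription study.
word_list = {
    "the":  (500, ["DH", "AH"]),
    "that": (200, ["DH", "AE", "T"]),
    "time": (100, ["T", "AY", "M"]),
}

# Weight each phoneme occurrence by its word's frequency.
counts = Counter()
for freq, phonemes in word_list.values():
    for p in phonemes:
        counts[p] += freq

# Relative frequency = weighted count / total weighted phoneme count.
total = sum(counts.values())
rel_freq = {p: n / total for p, n in counts.items()}

for p, r in sorted(rel_freq.items(), key=lambda kv: -kv[1]):
    print(f"{p:>2}: {r:.3f}")
```

A retranscription with a different pronunciation standard changes only the transcription column of such a table, which is exactly why the choice of transcribers (and their regional speech) can reweight the resulting phoneme distribution.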