The human auditory system's ability to recognize simple melodies that correspond to fundamental periods in sequences of periodic sounds devoid of fundamental energy was studied through musical interval identification experiments. Stimuli comprising two randomly chosen successive upper harmonics were presented both monotically (two harmonics to one ear) and dichotically (one harmonic to each ear). Subjects could recognize melodies equally well with both modes of stimulus presentation. The results imply that the pitch of these complex tones is mediated by a central processor operating on neural signals derived from those effective stimulus harmonics that are tonotopically resolved.
Volume 51, Number 2 (Part 2), 1972, p. 520
According to a recent extension of our theory of intensity perception [Lim et al., J. Acoust. Soc. Am. 62, 1256-1267 (1977)], two stimuli are matched in loudness if and only if their intensities divide the respective dynamic ranges proportionally in terms of just-noticeable differences. This study reports results of intensity discrimination and loudness matching experiments designed to test this prediction. Data were obtained over most of the dynamic range for three different types of sounds: a 1000-Hz tone in quiet, a 1000-Hz tone partially masked by a 2-octave band of noise, and spectrally flat wide-band noise. Of the five subjects tested, three produced results that had sufficient internal consistency to be useful for testing the predictions. For these subjects, the data and the theory were found to be reasonably consistent. Comparison with other studies, however, reveals that loudness matching results vary with matching paradigm by an amount that is significant with respect to the task of evaluating the theory. Hence, a rigorous test of the theory requires an improved understanding of the effects of matching paradigm.
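The matching rule stated above can be made concrete with a toy calculation. This is a minimal sketch, not the study's procedure: the dynamic-range endpoints, the constant 1-dB JND step, and the function names are illustrative assumptions. Two levels are predicted to match in loudness when they sit at the same proportional position within their respective dynamic ranges, counted in JNDs.

```python
def jnd_count(level_db, threshold_db, step_db):
    # With a constant JND step in dB (a simplifying assumption; real JND
    # sizes vary with level), the count is the distance from threshold
    # measured in steps.
    return (level_db - threshold_db) / step_db

def predicted_match(level_a, range_a, range_b, step_a=1.0, step_b=1.0):
    """Return the level in sound B predicted to be equally loud as
    level_a in sound A: the level that divides B's dynamic range in
    the same proportion, counted in JNDs."""
    thr_a, top_a = range_a
    thr_b, top_b = range_b
    prop = jnd_count(level_a, thr_a, step_a) / jnd_count(top_a, thr_a, step_a)
    return thr_b + prop * jnd_count(top_b, thr_b, step_b) * step_b

# Example: a tone in quiet with a 0-100 dB dynamic range, versus a
# partially masked tone whose range is compressed to 40-100 dB
# (all numbers made up for illustration).
print(predicted_match(50.0, (0.0, 100.0), (40.0, 100.0)))  # 70.0
```

With a constant dB step the rule reduces to linear interpolation between the range endpoints; with level-dependent JND sizes the predicted matching function would deviate from that straight line, which is the kind of deviation the discrimination and matching experiments jointly probe.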
Psychoacoustical tuning curves and interaural pitch matches were measured in a listener with a unilateral, moderately severe hearing loss of primarily cochlear origin below 2 kHz. The psychoacoustical tuning curves, measured in a simultaneous-masking paradigm, were obtained at 1 kHz for probe levels of 4.5-, 7-, and 13-dB SL in the impaired ear, and 7-dB SL in the normal ear. Results show that as the level of the probe increased from 4.5- to 13-dB SL in the impaired ear, (1) the frequency location of the tip of the tuning curve decreased from approximately 2.85 to 2.20 kHz and (2) the lowest level of the masker required to just mask the probe increased from 49- to 83-dB SPL. The tuning curve in the normal ear was comparable to data from other normal listeners. The interaural pitch matches were measured from 0.5 to 6 kHz at 10-dB SL in the impaired ear and approximately 15- to 20-dB SL in the normal ear. Results show reasonable identity matches (e.g., a 500-Hz tone in the impaired ear was matched to a tone near 500 Hz in the normal ear), although variability was significantly greater for pitch matches below 2 kHz. The results are discussed in terms of their implications for models of pitch perception.
Two same-different discrimination experiments were performed for click patterns having a total duration of about 4 sec and interclick intervals of n × 250 msec, with n a random integer. In Experiment 1, the influence of the physical click group structure on discrimination performance was investigated. In Experiment 2, the effect of the strength of an induced internal clock on discrimination performance was measured. Performance was poor if the group structure of clicks was maintained during a change in click pattern and also if the induced internal clock strength was low. The performance of about 70% of the subjects improved significantly if either a change in click grouping structure occurred or a strong internal clock could be induced. These results cannot be accounted for with simple models based on single-interval duration discrimination or between-pattern correlation statistics.

This paper deals with the cognitive representation of rhythmic patterns in music, particularly with factors that either enhance or inhibit our ability to detect small changes in such patterns. The sound patterns we studied are simple click patterns, which are quasimusical in the sense that clicks are separated by time intervals that are integer multiples of some basic time interval. Click patterns are devoid of any pitch, timbre, or dynamic variation. The use of such stimuli permits one to avoid possibly confounding influences of pitch, timbre, or loudness on perceived rhythm, as is easily the case when one is listening to real music. It nevertheless preserves a situation close enough to musical practice to ensure that results will be musically relevant. Handel (1989) has defined rhythm as an interplay between meter and grouping. According to this viewpoint, we might consider rhythm as a sequence of acoustic events that (1) interact with the implemented periodic framework (i.e., meter) and (2) may be divided into a number of sound-event clusters or groups.
His dichotomy between meter and grouping implies a relationship between serial and hierarchical processing of temporal auditory information. Grouping would apparently be associated with serial processing, while meter presumes complex multilevel coding of sound information.

The influence of both meter and grouping of events on rhythm perception has been confirmed by data from several investigations. One of the first among these, by Royer and Garner (1966), established the importance of groups of adjacent sound elements in a pattern (called runs) for its perceptual organization. Subjects had to reproduce the patterns.
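The stimulus construction described above, clicks separated by random integer multiples of a 250-msec base interval, accumulating to a total duration of about 4 sec, can be sketched as follows. The function name, the maximum multiple, and the seeding are illustrative assumptions, not details from the paper.

```python
import random

BASE_MS = 250      # basic time interval
TARGET_MS = 4000   # approximate total pattern duration (about 4 sec)

def make_click_pattern(max_multiple=4, rng=random):
    """Return click onset times in msec, starting at 0, with each
    interclick interval equal to n * 250 msec for a random integer n,
    continuing until the pattern spans roughly 4 sec."""
    onsets = [0]
    while onsets[-1] < TARGET_MS:
        n = rng.randint(1, max_multiple)   # inclusive bounds
        onsets.append(onsets[-1] + n * BASE_MS)
    return onsets

pattern = make_click_pattern(rng=random.Random(0))
# Every interclick interval is an integer multiple of the base interval.
assert all((b - a) % BASE_MS == 0 for a, b in zip(pattern, pattern[1:]))
```

Because the final interval may overshoot, the total duration is only approximately 4 sec, consistent with the "about 4 sec" in the abstract; such patterns carry no pitch, timbre, or dynamic variation, only the interval structure.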
This study examines subjects' ability to recognize the pitches of two missing fundamentals in two simultaneous two-tone complexes whose partials are distributed in various ways between subjects' ears. The data show that identification performance is affected at several levels. Limited frequency resolution in the peripheral auditory system can degrade performance, but only if none of the four stimulus partials is aurally resolved. Identification performance is only weakly dependent on the manner of distributing partials between the ears. In some cases it was found that, probably at a very central level (e.g., attention), the identification processes of the two simultaneous pitches interfere with one another. Some subjects are more likely to identify the pitch of one two-tone complex when the harmonic order of the other complex is higher than when this harmonic order is lower. Finally, some subjects tend to hear the complex tones analytically, i.e., to perceive pitches of single partials instead of the missing fundamentals, for some distributions of partials between the ears.