An experiment was conducted to determine whether the duty cycle and period of a train of tone pulses presented simultaneously to the comparison ear influence the adaptation measured at the opposite (test) ear. Eight normal-hearing listeners were adapted for 5 min to a steady 1-kHz pure tone at 60 dB SPL. Using a tracking procedure, adaptation over the 5-min period was measured under each of five comparison-signal conditions, each consisting of pulse trains with a different on/off ratio. The five on/off ratios (in milliseconds) were: 200/800 (20% duty cycle); 500/500, 200/200, and 800/800 (50% duty cycle); and 800/200 (80% duty cycle). Listeners received each condition ten times. The comparison signals had a frequency of 1000 Hz. There was a clear tendency for adaptation to increase as the duty cycle of the comparison tone increased from 20% to 80%. This was evident even when attempts were made to take into account the extent to which the pulse trains might have been perceived as less loud than continuous signals at the same level (the so-called LOT effect). For comparison tones with a constant (50%) duty cycle, the same amount of test-ear adaptation was measured whether the on-time of the signals was 200, 500, or 800 msec. [Work supported by the Research and Education Committee, Oklahoma City Veterans Administration Hospital, under Project No. 20-69.]
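The duty cycles quoted above follow directly from on-time / (on-time + off-time). A minimal sketch reproducing the five comparison-signal conditions (the code is illustrative, not part of the original study):

```python
# Duty cycle of a pulse train: fraction of each period the tone is on.
def duty_cycle(on_ms, off_ms):
    return on_ms / (on_ms + off_ms)

# The five on/off ratios (ms) from the experiment.
conditions = [(200, 800), (500, 500), (200, 200), (800, 800), (800, 200)]
cycles = {f"{on}/{off}": duty_cycle(on, off) for on, off in conditions}
# 200/800 -> 0.2; 500/500, 200/200, 800/800 -> 0.5; 800/200 -> 0.8
```

Note that the three 50%-duty-cycle conditions differ only in period (400, 1000, and 1600 msec), which is what lets the study separate duty cycle from period.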
A study was conducted to explore the effects of frequency, method, and level on temporal integration both at and above threshold. Listeners' thresholds were measured at three frequencies (250, 500, and 4000 Hz) using two methods (a tracking procedure and a 2IFC adaptive procedure), for durations of 20, 80, and 640 ms. Employing the same frequencies and signal durations, loudness-balance data were obtained from the same group of listeners at 40 and 70 dB SPL using two different loudness-balance methods (a method of adjustment and an adaptive procedure). Major findings were (1) threshold integration varied inversely with frequency, particularly for the tracking procedure; (2) temporal loudness summation varied inversely with level for both methods, with little difference in integration values as a function of frequency; and (3) no one method emerged superior with respect to variability at either threshold or suprathreshold levels. [Work supported by the Veterans Administration.]
A loudness summation study from this laboratory, using signal durations from 10 to 500 msec, failed to demonstrate a definite critical duration (i.e., straight-line fits to the data gave r's ⩾ 0.97). A more extensive investigation, involving a number of durations beyond 500 msec, was undertaken to examine the problem in more detail. Employing 12 signal durations (from 10 msec to 3 sec) and using a transformed up/down procedure to collect the data, temporal integration functions at 1000 Hz were determined for six normal-hearing listeners both at threshold and at three suprathreshold intensities: 30, 60, and 90 dB SL. For the suprathreshold conditions listeners balanced the loudness of each tone to that of a 1-sec reference tone. Group data showed (1) evidence of a critical duration of somewhere between 150 and 300 msec at all levels, although this value was highly dependent upon how the critical duration was determined, and (2) some tendency toward reduced steepness of the loudness summation function below the critical duration with increasing level. [Work supported by the Research and Education Committee, Oklahoma City V.A. Hospital.]
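A "transformed up/down procedure" of the kind used here is typically a 2-down/1-up staircase (Levitt-style), which converges on the 70.7%-correct point. A minimal sketch, with the step size, starting level, and stopping rule chosen as illustrative assumptions rather than values from the study:

```python
# 2-down/1-up staircase: step down after two consecutive detections,
# step up after one miss; the mean of the reversal levels estimates
# the 70.7%-correct point on the psychometric function.
def two_down_one_up(respond, start, step, n_reversals=8):
    """respond(level) -> True if the listener detects the signal."""
    level, streak, last_dir, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        if respond(level):
            streak += 1
            if streak == 2:            # two correct in a row -> step down
                streak = 0
                if last_dir == +1:     # direction change = reversal
                    reversals.append(level)
                level -= step
                last_dir = -1
        else:                          # one miss -> step up
            streak = 0
            if last_dir == -1:
                reversals.append(level)
            level += step
            last_dir = +1
    return sum(reversals) / len(reversals)   # mean reversal level

# Deterministic "listener" with a hard threshold at 10 dB:
estimate = two_down_one_up(lambda lvl: lvl >= 10, start=20, step=2)
# -> 9.0 (reversals oscillate between 8 and 10 around the true threshold)
```

A real run would use a stochastic psychometric function rather than a hard cutoff, but the bracketing behavior around threshold is the same.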
Eight listeners were exposed via earphones to two different octave bands of noise: 500–1000 Hz at an intensity of 115 dB SPL, and 1500–3000 Hz at an SPL of 110 dB. Six separate exposures to each noise were conducted: once to the noise presented continuously, and once to each of five intermittent noise conditions. For the intermittent conditions, "on" time was held constant at 50 msec and "off" times varied between 50 and 450 msec. Exposure durations were based on off times (the longer the off time, the longer the exposure), such that all exposures contained the same energy. Exposure durations ranged from 3 min for the continuous exposure to 30 min for the 450-msec-off condition. TTS2 was measured after each exposure for three test frequencies within one octave above the upper cutoff frequency of the noise. There was a systematic tendency for the mean TTS to increase with off time for the low-frequency exposure, in contrast to a much flatter function following exposure to the high-frequency noise. The results are interpreted in terms of their temporal relation to the latency and relaxation of the acoustic reflex. [Research supported by the Deafness Research Foundation.]
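The equal-energy scheduling above follows from scaling exposure duration inversely with duty cycle: with on-time fixed at 50 msec, total noise-on time (hence energy) stays constant across conditions. A quick sketch of that arithmetic (the intermediate off-time values are derived from the stated rule, not reported in the abstract):

```python
# Equal-energy exposure durations: duty cycle = ON / (ON + off), and
# duration scales as 1/duty so every condition delivers the same
# total noise-on time as the 3-min continuous exposure.
ON_MS = 50           # fixed "on" time per cycle (msec)
CONTINUOUS_MIN = 3   # continuous-exposure baseline (duty cycle = 1)

def exposure_minutes(off_ms):
    # Written as a single ratio so the arithmetic stays exact.
    return CONTINUOUS_MIN * (ON_MS + off_ms) / ON_MS

exposure_minutes(450)  # -> 30.0, matching the 450-msec-off condition
exposure_minutes(50)   # -> 6.0 (50% duty cycle: twice the continuous duration)
```

The endpoints (3 min continuous, 30 min at 450 msec off) match the durations reported in the abstract.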