Background: High noise levels in the intensive care unit (ICU) are a well-known problem, but little is known about the effect of noise on sleep quality in ICU patients. The study aim was to determine the effect of noise on subjective sleep quality.

Methods: This was a multicenter observational study in six Dutch ICUs. Noise recording equipment was installed in 2–4 rooms per ICU. Adult patients were eligible for the study 48 h after ICU admission and were followed for up to a maximum of five nights in the ICU. Exclusion criteria were presence of delirium and/or inability to be assessed for sleep quality. Sleep was evaluated using the Richards Campbell Sleep Questionnaire (range 0–100 mm). Noise recordings were used for analysis of various auditory parameters, including the number and duration of restorative periods. Hierarchical mixed-model regression analysis was used to determine associations between noise and sleep.

Results: In total, 64 patients (68% male), with mean age 63.9 (± 11.7) years and mean Acute Physiology And Chronic Health Evaluation (APACHE) II score 21.1 (± 7.1), were included. The average sleep quality score was 56 ± 24 mm. The mean of the 24-h average sound pressure levels (LAeq,24h) was 54.0 dBA (± 2.4). Mixed-effects regression analyses showed that background noise (β = −0.51, p < 0.05) had a negative impact on sleep quality, whereas the number of restorative periods (β = 0.53, p < 0.01) and female sex (β = 1.25, p < 0.01) were weakly but significantly positively correlated with sleep.

Conclusions: Noise levels are negatively associated, and restorative periods and female sex are positively associated, with subjective sleep quality in ICU patients.

Trial registration: www.ClinicalTrials.gov, NCT01826799. Registered on 9 April 2013.

Electronic supplementary material: The online version of this article (10.1186/s13054-018-2182-y) contains supplementary material, which is available to authorized users.
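For reference, the LAeq,24h reported above is an equivalent continuous sound level: an energy average over the measurement period, not an arithmetic mean of dBA readings, so louder intervals dominate the result. A minimal sketch of the computation (the function name and the assumption of equally spaced readings are illustrative, not from the study):

```python
import math

def laeq(levels_dba):
    """Equivalent continuous sound level (dBA) from equally spaced
    short-interval dBA readings. Levels are converted to relative
    energy, averaged, and converted back to decibels."""
    mean_energy = sum(10 ** (level / 10) for level in levels_dba) / len(levels_dba)
    return 10 * math.log10(mean_energy)
```

For example, alternating intervals of 50 and 60 dBA give an LAeq of about 57.4 dBA, noticeably above the 55 dBA arithmetic mean.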
When speech is in competition with interfering sources in rooms, monaural indicators of intelligibility fail to take account of the listener's ability to separate target speech from interfering sounds using the binaural system. In order to incorporate these segregation abilities and their susceptibility to reverberation, Lavandier and Culling [J. Acoust. Soc. Am. 127, 387-399 (2010)] proposed a model which combines the effects of better-ear listening and binaural unmasking. A computationally efficient version of this model is evaluated here under more realistic conditions that include head shadow, multiple stationary noise sources, and real-room acoustics. Three experiments are presented in which speech reception thresholds were measured in the presence of one to three interferers; real-room listening was simulated over headphones by convolving anechoic stimuli with binaural room impulse responses measured with dummy-head transducers in five rooms. Without fitting any parameter of the model, there was close correspondence between measured and predicted differences in threshold across all tested conditions. The model's components of better-ear listening and binaural unmasking were validated both in isolation and in combination. The computational efficiency of this prediction method allows the generation of complex "intelligibility maps" from room designs.
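The better-ear component of such models can be sketched simply: in each frequency band, the listener benefits from whichever ear receives the higher target-to-interferer ratio, and the per-band advantages are then combined across bands. A minimal illustration (the function name, uniform band weighting, and the omission of the binaural-unmasking term are simplifying assumptions, not the published model):

```python
def better_ear_advantage(snr_left_db, snr_right_db, weights=None):
    """Per-band better-ear SNR (dB): take the more favorable ear in each
    frequency band, then combine across bands with the given weights
    (uniform by default). The published model additionally adds a
    binaural-unmasking term in each band before combining."""
    per_band = [max(left, right) for left, right in zip(snr_left_db, snr_right_db)]
    if weights is None:
        weights = [1.0 / len(per_band)] * len(per_band)
    return sum(w * snr for w, snr in zip(weights, per_band))
```

With an interferer on the left, for instance, the right ear's head-shadow advantage dominates low and high bands alike, which is why the better-ear term alone already predicts much of the spatial release from masking.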
In optimal conditions, the benefit of bilateral implantation to speech intelligibility in noise can be much larger than has previously been reported. This benefit is thus considerably larger than reported benefits of summation or squelch and is robust in reverberation when the interfering source is close.
The effect of irrelevant sounds on short-term memory was investigated in two experiments using noise-vocoded speech stimuli (NVSS). Speech samples were systematically modified by a noise vocoder, creating a set of stimuli ranging from amplitude-modulated white noise to intelligible speech. Eight NVSS conditions, composed of 1-, 2-, 4-, 6-, 9-, 12-, 15-, and 18-bands, were used as distracting stimuli in a digit-recall task, in addition to speech and silence conditions. The results showed that performance decreased with the number of frequency bands up to the 6-bands condition, but there was no influence of the number of bands on performance beyond six bands. The results were analyzed using four acoustic metrics proposed in the literature: the frequency domain correlation coefficient (FDCC), the fluctuation strength, the speech transmission index (STI), and the normalized covariance measure (NCM). None of the metrics successfully predicted the results. However, the parameter values of the FDCC, the STI, and the NCM indicated that a prediction model for the irrelevant sound effect should account for both temporal and spectral features of the irrelevant sounds.
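Noise vocoding splits speech into frequency bands, extracts each band's temporal envelope, and uses that envelope to modulate band-limited noise; fewer bands preserve less spectral detail, which is why intelligibility grows with band count. A crude NumPy sketch (log-spaced band edges, FFT-based filtering, and the rectified band signal as the envelope are simplifying assumptions; the study's exact filter bank is not specified here):

```python
import numpy as np

def noise_vocode(signal, fs, n_bands, fmin=100.0, fmax=8000.0, seed=0):
    """Crude FFT-based noise vocoder: for each band, the magnitude of the
    band-limited speech serves as a (rectified, unsmoothed) envelope,
    which then modulates band-limited white noise."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    sig_spec = np.fft.rfft(signal)
    noise_spec = np.fft.rfft(np.random.default_rng(seed).standard_normal(n))
    edges = np.geomspace(fmin, fmax, n_bands + 1)  # log-spaced band edges
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_band = (freqs >= lo) & (freqs < hi)
        band_sig = np.fft.irfft(np.where(in_band, sig_spec, 0), n)
        band_noise = np.fft.irfft(np.where(in_band, noise_spec, 0), n)
        out += np.abs(band_sig) * band_noise  # envelope x noise carrier
    return out
```

With n_bands=1 this reduces to broadband amplitude-modulated noise; with many bands the coarse spectro-temporal pattern of the original speech is restored.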
In order to find a stress indicator that can be used to monitor stress with wearables, we compared the almost instantaneous effects of psychological stress on skin conductance with the effects on the stress hormone cortisol, which peaks about 20-30 min later. We modeled this relation by convolving the heights of the skin conductance peaks with the cortisol stress response curve, and used it to determine a skin conductance-derived estimate of stress-induced cortisol. We then conducted a first experiment to validate this model, comparing the stress-induced cortisol estimates with cortisol as measured in saliva samples. Participants (N = 46) completed stressful, boring, and performance tasks in a controlled laboratory setting. Salivary cortisol samples were taken at regular moments. Based upon the pattern of measured salivary cortisol before and after the stressful task, we divided subjects into high-cortisol responders and low-cortisol responders. For both groups, we found substantial correlations between the skin conductance-based stress-induced cortisol estimates and the measured salivary cortisol. In addition, the (Fisher-corrected) mean within-participant correlation between these variables was found to be 0.48, which proved to be significantly different from zero. These findings support the use of the skin conductance-based stress-induced cortisol estimates as a stress indicator reflecting in-body cortisol changes.
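The convolution step described above can be sketched as follows: treat the detected skin conductance peaks as an impulse train weighted by peak height, and convolve it with a response curve that rises and decays over tens of minutes. A toy illustration (the gamma-shaped kernel and all parameter values are assumptions for illustration; the paper's fitted response curve is not reproduced here):

```python
import math

def cortisol_estimate(peak_heights_per_min, kernel_minutes=60, peak_minute=25):
    """Convolve a per-minute train of skin-conductance peak heights with a
    gamma-like response kernel peaking ~25 min after the stressor, giving
    a relative (unitless) estimate of stress-induced cortisol over time."""
    shape = 2.0
    # gamma-like kernel normalized to a maximum of 1 at t = peak_minute
    kernel = [(t / peak_minute) ** shape * math.exp(shape * (1 - t / peak_minute))
              for t in range(kernel_minutes)]
    out = [0.0] * (len(peak_heights_per_min) + kernel_minutes - 1)
    for i, height in enumerate(peak_heights_per_min):  # discrete convolution
        for j, k in enumerate(kernel):
            out[i + j] += height * k
    return out
```

A single skin-conductance peak at minute 0 thus produces an estimated cortisol trace that peaks 25 min later, mirroring the delayed salivary response the study exploits.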
Both binaural hearing and directional microphones can improve understanding of speech in background noise if the sources of the speech and noise are spatially separated. We used a model of spatial release from masking [Jelfs et al. (2011), Hear. Res. 275, 96-104] to predict the benefits of bilateral prostheses, directional microphones, and head orientation. The model predicts large benefits from each of these factors. Measurements using selected spatial configurations in both normally hearing listeners and unilateral cochlear implantees confirmed the model's predictions. The reception thresholds for bilateral implantees were inferred, using mirror-image spatial configurations, to be at least 18 dB better than for unilateral implantees in certain situations. Expected effects of directional microphones and head orientation were assessed by modelling spatial release from masking in a virtual restaurant situation. The model predicted marked differences between different seating positions, but in most locations both moderate head rotations and directional microphones offered substantial benefits. Use of directional microphones generally offered larger benefits than head rotation, but there was little benefit from their combination. The addition of reverberation elevated predicted thresholds and reduced all of these effects.