We investigate the short-term association between multidimensional acoustic characteristics of everyday ambient sound and continuous mean heart rate. We used in-market data from hearing aid users who logged ambient acoustics via smartphone-connected hearing aids and continuous mean heart rate in 5-min intervals from their own wearables. We find that acoustic characteristics explain approximately 4% of the fluctuation in mean heart rate throughout the day. Specifically, increases in ambient sound pressure intensity are significantly related to increases in mean heart rate, corroborating prior laboratory and short-term real-world data. In addition, increases in ambient sound quality (that is, more favourable signal-to-noise ratios) are associated with decreases in mean heart rate. Our findings document a previously unrecognized mixed influence of everyday sounds on cardiovascular stress and show that the relationship is more complex than an examination of sound intensity alone would suggest. Thus, our findings highlight the relevance of ambient environmental sound in models of human ecophysiology.
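The analysis described above can be sketched as a regression of heart rate on acoustic predictors, where the share of variance explained is read off as R². The snippet below is a minimal illustration on simulated data, not the study's actual model: the predictor names (`spl`, `snr`), their distributions, and the coefficient magnitudes are assumptions chosen only to reproduce the qualitative pattern (heart rate rising with sound pressure, falling with better signal-to-noise ratio, and most variance left unexplained).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5-min interval observations: two acoustic predictors
# and mean heart rate. All values are simulated for illustration.
n = 2000
spl = rng.normal(65, 10, n)   # sound pressure level in dB (assumed)
snr = rng.normal(5, 4, n)     # signal-to-noise ratio in dB (assumed)

# Assumed effect directions from the abstract: heart rate increases
# with intensity, decreases with better SNR; most variation is noise.
hr = 70 + 0.10 * spl - 0.15 * snr + rng.normal(0, 6, n)

# Ordinary least squares fit with an intercept column.
X = np.column_stack([np.ones(n), spl, snr])
beta, *_ = np.linalg.lstsq(X, hr, rcond=None)

# R^2: fraction of heart-rate variance explained by the acoustics.
resid = hr - X @ beta
r2 = 1 - resid.var() / hr.var()
print(f"R^2 = {r2:.3f}")
```

With these assumed coefficients the explained fraction lands near a few percent, in the same ballpark as the ~4% reported, while the fitted signs match the reported directions of effect.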
Although orientation coding in the human visual system has been researched with simple stimuli, little is known about how orientation information is represented while viewing complex images. We show that, similar to findings with simple Gabor textures, the visual system involuntarily discounts orientation noise in a wide range of natural images, and that this discounting produces a dipper function in the sensitivity to orientation noise, with best sensitivity at intermediate levels of pedestal noise. However, the level of this discounting depends on the complexity and familiarity of the input image, resulting in an image-class-specific threshold that changes the shape and position of the dipper function according to image class. These findings do not fit a filter-based feed-forward view of orientation coding, but can be explained by a process that utilizes an experience-based perceptual prior of the expected local orientations and their noise. Thus, the visual system encodes orientation in a dynamic context by continuously combining sensory information with expectations derived from earlier experiences.
The prediction model evaluates pigmented skin lesions with regard to overall shape, border, and colour distribution using a total of nine discriminating parameters. The model outputs an index score, and by applying the optimal threshold value, a diagnostic accuracy of 77% in discriminating between malignant and benign skin lesions was obtained. This is an improvement over naked-eye analysis performed by professionals, making the system a significant aid in detecting malignant cutaneous melanoma.
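The decision step described above (reduce the lesion to a single index score, then apply one optimal cut-off) can be illustrated with a small sketch. The index-score distributions below are simulated placeholders, not the study's data, and the accuracy-maximising sweep is one common way such an "optimal threshold" is chosen; it is an assumption that this is how the original model did it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated index scores for illustration only: benign lesions tend
# to score lower than malignant ones, with overlapping distributions.
benign = rng.normal(0.3, 0.15, 200)
malignant = rng.normal(0.6, 0.15, 200)

scores = np.concatenate([benign, malignant])
labels = np.concatenate([np.zeros(200), np.ones(200)])  # 1 = malignant

# Sweep every observed score as a candidate cut-off and keep the one
# that maximises overall accuracy (predict malignant when score >= t).
candidates = np.unique(scores)
accs = [((scores >= t) == labels).mean() for t in candidates]
best_t = candidates[int(np.argmax(accs))]
best_acc = max(accs)
print(f"optimal threshold = {best_t:.2f}, accuracy = {best_acc:.2%}")
```

Because the two simulated score distributions overlap, no threshold separates the classes perfectly; the achievable accuracy depends on that overlap, mirroring why the reported system tops out at 77% rather than 100%.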
Ideally, public health policies are formulated from scientific data; however, policy-specific data are often unavailable. Big data can generate ecologically valid, high-quality scientific evidence, and therefore has the potential to change how public health policies are formulated. Here, we discuss the use of big data for developing evidence-based hearing health policies, using data collected and analyzed with a research prototype of a data repository known as EVOTION (EVidence-based management of hearing impairments: public health pOlicy-making based on fusing big data analytics and simulaTION), to illustrate our points. Data in the repository consist of audiometric clinical data, prospective real-world data collected from hearing aids and an app, and responses to questionnaires collected for research purposes. To date, we have used the platform and a synthetic dataset to model the estimated risk of noise-induced hearing loss and have shown novel evidence of ways in which external factors influence hearing aid usage patterns. We contend that this research prototype data repository illustrates the value of using big data for policy-making by providing high-quality evidence that could be used to formulate and evaluate the impact of hearing health care policies.
Purpose The purpose of this study was to investigate real-life benefit from directional microphone and noise reduction ("noise management" [NM]) processing using retrospective self-reports and smartphone-based ecological momentary assessments (EMAs) combined with logging of the acoustic environments. Method A single-blinded, counterbalanced crossover design was used. Eleven hearing-impaired adults were bilaterally fitted with behind-the-ear devices with NM either activated (NM ON) or deactivated. For the retrospective self-reports, the short scale of the Speech, Spatial, and Qualities of Hearing Scale questionnaire (SSQ12) was applied. For the EMAs, smartphone-based self-reports combined with hearing aid (HA)–based classifications of the listening environments ("soundscapes") experienced by the participants were used. To explore potential associations with the real-life data, two laboratory measures of aided speech recognition in noise were administered. Results The soundscapes in which the participants submitted their EMAs were representative of the soundscapes they experienced during normal HA use and of the soundscapes reported in the literature for older HA users. The SSQ12 and EMA scores both showed an overall benefit from NM ON. The EMA scores, together with the logged acoustic data, revealed that this benefit was driven by NM ON being preferred particularly in listening environments classified as "speech" or "speech in noise." The laboratory measures of aided speech recognition in noise were unable to predict the real-life data. Conclusions EMA combined with acoustic data logging is suited for more targeted evaluations of real-life HA benefit. Advanced NM settings can provide subjective user benefits in specific listening situations.