Objectives: Children who use cochlear implants (CIs) have characteristic pitch processing deficits leading to impairments in music perception and in understanding emotional intention in spoken language. Music training for normal-hearing children has previously been shown to benefit perception of emotional prosody. The purpose of the present study was to assess whether deaf children who use CIs obtain similar benefits from music training. We hypothesized that music training would lead to gains in auditory processing and that these gains would transfer to emotional speech prosody perception.
Design: Study participants were 18 child CI users (ages 6 to 15). Participants received either 6 months of music training (i.e., individualized piano lessons) or 6 months of visual art training (i.e., individualized painting lessons). Measures of music perception and emotional speech prosody perception were obtained pre-, mid-, and post-training. The Montreal Battery for Evaluation of Musical Abilities was used to measure five different aspects of music perception (scale, contour, interval, rhythm, and incidental memory). The emotional speech prosody task required participants to identify the emotional intention of a semantically neutral sentence under audio-only and audiovisual conditions.
Results: Music training led to improved performance on tasks requiring the discrimination of melodic contour and rhythm, as well as incidental memory for melodies. These improvements were predominantly found from mid- to post-training. Critically, music training also improved emotional speech prosody perception. Music training was most advantageous in audio-only conditions. Art training did not lead to the same improvements.
Conclusions: Music training can lead to improvements in perception of music and emotional speech prosody, and thus may be an effective supplementary technique for supporting auditory rehabilitation following cochlear implantation.
Across many diverse areas of research, it is common to average a series of observations and to use these averages in subsequent analyses. Research using this approach faces the challenge of knowing when these averages are stable; that is, to what extent do these averages change when additional observations are included? Using averages that are not stable introduces a great deal of error into any analysis. The current research develops a tool, implemented in R, to assess when averages are stable. Using a sequential sampling approach, it determines how many observations are needed before additional observations would no longer meaningfully change an average. The utility of this tool is illustrated in the context of impression formation, demonstrating that averages of some perceived traits (e.g., happy) stabilize with fewer observations than others (e.g., assertive). A tutorial regarding how to utilize this tool in researchers' own data is provided.
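The sequential-sampling idea described above can be sketched as a simple "corridor of stability" check: track the running mean and find the first point after which adding observations never moves it outside a tolerance band around the final mean. This is an illustrative sketch only; the function name, the tolerance criterion, and its default value are assumptions, and the actual R tool described in the abstract may use a different stopping rule.

```python
import statistics

def point_of_stability(observations, tolerance=0.1):
    """Return the number of observations after which the running mean
    stays within +/- tolerance of the final mean.

    A minimal corridor-of-stability sketch, not the authors' algorithm.
    """
    final_mean = statistics.fmean(observations)

    # Running (cumulative) mean after each observation.
    running = []
    total = 0.0
    for i, x in enumerate(observations, start=1):
        total += x
        running.append(total / i)

    # First index from which every later running mean stays in the corridor.
    for i in range(len(running)):
        if all(abs(m - final_mean) <= tolerance for m in running[i:]):
            return i + 1  # 1-based count of observations needed
    return len(observations)
```

For example, a series that starts with two extreme ratings but then settles (e.g., `[1, 9, 5, 5, 5, 5, 5, 5]`) stabilizes after the second observation, since every running mean from that point on equals the final mean of 5.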
The Emoti-Chair is a sensory substitution system that brings a high-resolution audio-tactile version of music to the body. The system can be used to improve music accessibility for deaf or hard of hearing people, while offering everyone the chance to experience sounds as tactile sensations. The model human cochlea (MHC) is the sensory substitution system that drives the Emoti-Chair. Music can be experienced as a tactile modality, revealing vibrations that originate from different instruments and sounds spanning the audio frequency spectrum along multiple points of the body. The system uses eight separate audio-tactile channels to deliver sound to the body, and provides an opportunity to experience a broad range of musical elements as physical vibrations.
The CSA Standard CAN/CSA-Z107.56-06 (R2011) "Procedures for the Measurement of Occupational Noise Exposure" deals with noise exposures found in industrial settings, where in most situations the noise source is in the far field. The Standard also provides procedures for measurement in situations where the noise sources include sources in the near field, which is the case with headsets. These procedures involve the use of sophisticated equipment and techniques that are generally difficult to implement in the workplace. However, the Standard also provides a simple calculation method that only requires the measurement of the background noise level using a sound level meter or a dosimeter. The calculation method assumes a signal-to-noise ratio (S/N) of 15 dBA, intended to ensure a comfortable listening level for speech understanding. The noise exposure level of the ear under the headset is thus obtained as the sum of the background noise level (corrected for headset attenuation and duration of the signal) plus 15 dBA for the S/N. The objective of the present study was to assess the validity of the calculation method under different background noise conditions. Three different background noises were played at three sound levels. The noise exposure level under two headsets with different attenuations was assessed using a speech-in-noise paradigm. Participants were asked to adjust the signal level to comfortably understand the speech. The increase in sound level was measured for each combination of parameters using an artificial ear.
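The simple calculation method above amounts to adding three terms: the background level attenuated by the headset, a duration correction, and the assumed 15 dBA S/N. The sketch below illustrates that arithmetic under stated assumptions: the function and parameter names are hypothetical, and the duration correction is assumed to follow the equal-energy (3 dB exchange) rule relative to an 8-hour reference shift; the Standard itself should be consulted for the exact correction formulas.

```python
import math

def headset_exposure_level(background_dba, attenuation_db, exposure_hours,
                           reference_hours=8.0, assumed_snr_dba=15.0):
    """Estimate the noise exposure level at the ear under a headset.

    Illustrative sketch of the simple calculation method: background
    level corrected for headset attenuation and signal duration, plus
    the assumed 15 dBA signal-to-noise ratio. The equal-energy duration
    correction (10*log10(hours/reference)) is an assumption here, not a
    quotation from the CSA Standard.
    """
    attenuated = background_dba - attenuation_db
    duration_correction = 10 * math.log10(exposure_hours / reference_hours)
    return attenuated + duration_correction + assumed_snr_dba
```

For instance, with an 80 dBA background, a headset attenuating 10 dB, and a full 8-hour exposure, the estimate is 80 - 10 + 0 + 15 = 85 dBA.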