Vowels, consonants, and sentences were processed by two cochlear-implant signal-processing strategies, a fixed-channel strategy and a channel-picking strategy, and the resulting signals were presented to listeners with normal hearing for identification. At issue was the number of channels of stimulation needed in each strategy to achieve an equivalent level of speech recognition in quiet and in noise. In quiet, 8 fixed channels allowed a performance maximum for the most difficult stimulus material. A similar level of performance was reached with a 6-of-20 channel-picking strategy. In noise, 10 fixed channels allowed a performance maximum for the most difficult stimulus material. A similar level of performance was reached with a 9-of-20 strategy. Both strategies are capable of providing a very high level of speech recognition. Choosing between the two strategies may, ultimately, depend on issues that are independent of speech recognition, such as ease of device programming.
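The channel-picking ("n-of-m") rule compared above can be sketched in a few lines. This is a minimal illustration of the selection step only, not the authors' implementation: it assumes per-frame channel envelope energies have already been computed by a filterbank, and simply keeps the n most energetic of m channels each frame.

```python
import numpy as np

def pick_channels(frame_energies, n):
    """n-of-m channel selection: given the envelope energy of each of m
    analysis channels for one stimulation frame, return a boolean mask
    marking the n highest-energy channels (the only ones stimulated)."""
    order = np.argsort(frame_energies)[::-1]      # channels, strongest first
    mask = np.zeros(len(frame_energies), dtype=bool)
    mask[order[:n]] = True                        # keep the top n channels
    return mask

# Hypothetical frame with m = 5 channels; a 2-of-5 pick keeps channels 1 and 3.
energies = np.array([0.1, 0.9, 0.3, 0.8, 0.2])
selected = pick_channels(energies, n=2)
```

A 6-of-20 strategy, as in the quiet condition above, would call this with m = 20 energies and n = 6 on every stimulation frame.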
Objective
These experiments address concerns that motor vehicles in electric engine mode are so quiet that they pose a risk to pedestrians, especially those with visual impairments.
Background
The “quiet car” issue has centered on hybrid and electric vehicles, although it also applies to internal combustion engine vehicles. Previous research has focused on the detectability of vehicles, mostly in quiet settings. Instead, we focused on the functional ability to perceive vehicle motion paths.
Method
Participants judged whether simulated vehicles were traveling straight or turning, with emphasis on the impact of background traffic sound.
Results
In quiet, listeners made the straight-or-turn judgment soon enough in the vehicle’s path to be useful for deciding whether to start crossing the street. This judgment is based largely on sound level cues rather than the spatial direction of the vehicle. With even moderate background traffic sound, the ability to tell straight from turn paths is severely compromised. The signal-to-noise ratio needed for the straight-or-turn judgment is much higher than that needed to detect a vehicle.
Conclusion
Although a requirement for a minimum vehicle sound level might enhance detection of vehicles in quiet settings, it is unlikely that this requirement would contribute to pedestrian awareness of vehicle movements in typical traffic settings with many vehicles present.
Application
The findings are relevant to deliberations by government agencies and automobile manufacturers about standards for minimum automobile sounds and, more generally, for solutions to pedestrians’ needs for information about traffic, especially for pedestrians with sensory impairments.
Background
The Stacked ABR (auditory brainstem response) attempts to compensate, at the output of the auditory periphery, for the temporal dispersion of neural activation caused by the cochlear traveling wave in response to click stimulation. Compensation can also be applied at the input by using a chirp stimulus. The Stacked ABR has been shown to be sensitive to small tumors that are often missed by standard ABR latency measures.
Purpose
Because a chirp stimulus requires only a single data-acquisition run whereas the Stacked ABR requires six, we evaluated indirect evidence bearing on whether a chirp is justified for small-tumor detection.
Research Design
We compared the sensitivity and specificity of different Stacked ABRs formed by aligning the derived-band ABRs according to (1) the individual’s peak latencies, (2) the group mean latencies, and (3) the modeled latencies used to develop a chirp.
Results
For tumor detection with a chosen sensitivity of 95%, a relatively high specificity of 85% may be achieved with a chirp.
Conclusion
It appears worthwhile to explore the actual use of a chirp because significantly shorter test and analysis times might be possible.
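The alignment-and-summation step shared by the three Stacked ABR variants compared under Research Design can be sketched as follows. This is an illustrative simplification, not the study's analysis code: it assumes each derived-band ABR is a sampled waveform with a known wave V latency (individual, group-mean, or chirp-model, per the three schemes), shifts each band so the peaks line up, and sums.

```python
import numpy as np

def stacked_abr(derived_band_abrs, wave_v_latencies_ms, fs):
    """Form a Stacked ABR: time-shift each derived-band ABR so that its
    wave V peak aligns with the first band's peak, then sum the aligned
    waveforms.  The latency list may hold individual peak latencies,
    group-mean latencies, or modeled (chirp) latencies."""
    ref_ms = wave_v_latencies_ms[0]
    stacked = np.zeros_like(derived_band_abrs[0], dtype=float)
    for waveform, lat_ms in zip(derived_band_abrs, wave_v_latencies_ms):
        # Later peaks are shifted earlier by their latency difference.
        shift = int(round((lat_ms - ref_ms) * fs / 1000.0))
        stacked += np.roll(waveform, -shift)
    return stacked
```

Because the bands add in phase after alignment, the stacked wave V amplitude approaches the sum of the band amplitudes; it is this summed amplitude that small tumors reduce.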
ABRs obtained with chirp stimuli provide an efficient method for estimating hearing thresholds in individuals with normal hearing and sensory hearing loss when broadband signals are selected for testing. ABRs to chirps display higher peak-to-peak amplitudes than those obtained with clicks and may provide responses closer to behavioral thresholds. This information could result in improved accuracy in identifying hearing loss and estimating hearing sensitivity for broadband signals in infants, children, and difficult-to-test older populations.
Three factors account for the high level of speech understanding in quiet enjoyed by many patients fit with cochlear implants. First, some information about speech exists in the time/amplitude envelope of speech. This information is sufficient to narrow the number of word candidates for a given signal. Second, when envelope information is available to listeners, only minimal information from the frequency domain is necessary for high levels of speech recognition in quiet. Third, perceptual strategies for speech are inherently flexible in the mapping between signal frequencies (i.e., the locations of the formants) and phonetic identity.
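The time/amplitude envelope referred to in the first factor can be extracted with a very simple scheme. The sketch below is one common textbook approach (full-wave rectification followed by low-pass smoothing), offered only as an illustration; the cutoff value is an assumption, not taken from the work above.

```python
import numpy as np

def amplitude_envelope(signal, fs, cutoff_hz=50.0):
    """Coarse amplitude envelope of a speech waveform: full-wave rectify,
    then smooth with a moving average whose window length roughly matches
    a low-pass cutoff of cutoff_hz (an assumed, illustrative value)."""
    rectified = np.abs(signal)
    window = max(1, int(fs / cutoff_hz))          # ~1/cutoff seconds of samples
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

# Example: the envelope of a steady 1 kHz tone is roughly flat at 2/pi
# times its peak amplitude (the mean of a rectified sinusoid).
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)
env = amplitude_envelope(tone, fs)
```

In an implant processor this envelope is computed per analysis channel and used to set stimulation levels, which is why envelope cues alone can narrow the set of word candidates.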