The analysis of breathing sounds measured over the extrathoracic trachea offers a noninvasive technique for monitoring obstructions of the respiratory tract. Essential to the development of this technique is a quantitative understanding of how such tracheal sounds are related to the underlying tract anatomy, airflow, and disease-induced obstructions. In this study, the first dynamic acoustic model of the respiratory tract was developed that accounts for factors such as turbulent sound sources and a varying glottal aperture. Model predictions were compared with tracheal sounds measured on four healthy subjects at target flow rates of 0.5, 1.0, 1.5, and 2.0 L/s, and also during nontargeted breathing. Both the simulated and measured spectra showed increasing sound power with increasing flow, with smaller incremental increases at the higher flow rates. A sound power increase of approximately 30 dB between flow rates of 0.5 and 2.0 L/s was observed in both the simulated and measured spectra. Variations of as much as 15 dB over the 300–600 Hz frequency band were noted in the sound power produced during targeted and nontargeted breathing maneuvers at the same flow rates. We propose that this variability was due in part to changes in glottal aperture area, which is known to vary during normal respiration and has been observed as a means of flow control. Model simulations incorporating a turbulent source at the glottis, with respiratory-cycle variations in glottal aperture from 0.64 cm² to 1.4 cm², explained approximately 10 dB of the measured variation. This study provides the first link between spatially distributed sound sources due to turbulent flow in the respiratory tract and noninvasive tracheal sound measurements.
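The abstract does not give the model's source equations; as a rough, illustrative check of the numbers above, the sketch below assumes a generic power-law turbulence source, P ∝ U^n, where U is the flow (or glottal jet) velocity and the exponent n is a hypothetical choice, not a value from the study. Under that assumption, n ≈ 5 reproduces the reported ~30 dB rise over the fourfold flow increase, and n ≈ 3 would account for roughly 10 dB from the stated aperture change at fixed flow.

    import math

    def db_change(ratio, exponent):
        """Sound-power change in dB for a power-law source, P proportional to U**exponent."""
        return 10.0 * exponent * math.log10(ratio)

    # Flow dependence: a 4x flow increase (0.5 -> 2.0 L/s) with a
    # hypothetical n = 5 gives 10 * 5 * log10(4) ~ 30.1 dB, matching
    # the reported ~30 dB rise.
    print(db_change(2.0 / 0.5, 5))

    # Aperture dependence: at fixed flow Q, the glottal jet velocity is
    # U = Q / A, so narrowing the aperture from 1.4 cm^2 to 0.64 cm^2
    # raises U by a factor of about 2.2; with a hypothetical n = 3 this
    # yields ~10.2 dB, on the order of the ~10 dB the model attributes
    # to glottal-aperture variation.
    print(db_change(1.4 / 0.64, 3))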
Few devices exist to aid in the training of the pitch, intensity, and rhythm of speech. The Interactive Prosody Training Workstation (PW) employs state-of-the-art technology to assist users in achieving clinician- or client-programmable targets in each of these features. Two training interfaces are currently implemented. The simpler one displays F0 and/or intensity in real time as a fluctuating one- or two-dimensional display. The smoothing can be adjusted to accommodate varying degrees of vocal variability. With the more advanced interface, model utterances are presented using stored LPC-coded speech. The user’s response is then compared to the model using any of a wide variety of scoring methods. The user’s response may be replayed, in comparison with the model, as often as desired. A demonstration tape will be played showing one hearing-impaired individual with a cochlear implant using the PW, first to find his appropriate F0 register and then to practice pitch gestures appropriate for speech. Strong carryover of learning to later sessions is demonstrated. [Work supported in part by an SBIR grant from NIH.]
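The abstract does not say how the PW's adjustable smoothing is implemented; as one minimal, hypothetical sketch, an exponential moving average over a frame-by-frame F0 track gives a single knob (alpha) that trades responsiveness against stability, analogous to the adjustable smoothing described above. The function name and parameter are illustrative, not part of the PW.

    def smooth_f0(f0_track, alpha=0.3):
        """Exponentially smooth an F0 contour (Hz); None marks unvoiced frames.

        alpha near 1 follows the raw contour closely; alpha near 0 smooths
        heavily -- the same trade-off as an adjustable smoothing control.
        """
        smoothed, state = [], None
        for f0 in f0_track:
            if f0 is None:          # unvoiced frame: emit no pitch estimate
                smoothed.append(None)
                continue
            state = f0 if state is None else alpha * f0 + (1 - alpha) * state
            smoothed.append(state)
        return smoothed

    # A jittery contour around 120 Hz settles toward a steady displayed value.
    print(smooth_f0([118, 124, None, 119, 123, 121]))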