Young adults are receptive to a music video parody as a tool for promoting breastfeeding, which can help increase their comfort with breastfeeding.
Purpose: Listeners understand significantly more speech in noise when the talker's face can be seen (visual speech) than in an auditory-only baseline (a visual speech benefit). This study investigated whether the visual speech benefit is reduced when the correspondence between auditory and visual speech is uncertain and whether any reduction is affected by listener age (older vs. younger) and by how severely the auditory signal is masked. Method: Older and younger adults completed a speech recognition in noise task that included an auditory-only condition and four auditory–visual (AV) conditions in which one, two, four, or six silent talking-face videos were presented. One face always matched the auditory signal; the other face(s) did not. Auditory speech was presented in noise at −6 and −1 dB signal-to-noise ratio (SNR). Results: When the SNR was −6 dB, for both age groups, the standard-sized visual speech benefit was reduced as more talking faces were presented. When the SNR was −1 dB, younger adults received the standard-sized visual speech benefit even when two talking faces were presented, whereas older adults did not. Conclusions: The size of the visual speech benefit obtained by older adults was always smaller when AV correspondence was uncertain; this was not the case for younger adults. Difficulty establishing AV correspondence may be a factor that limits older adults' speech recognition in noisy AV environments. Supplemental Material: https://doi.org/10.23641/asha.16879549
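The SNR levels quoted above (−6 and −1 dB) describe the power of the speech relative to the masking noise on a logarithmic scale. As a minimal sketch of how such mixtures are typically prepared (the function names and the use of RMS power here are illustrative assumptions, not the authors' actual stimulus pipeline), the noise can be rescaled so that the mixture hits a requested SNR:

```python
import numpy as np

def snr_db(speech, noise):
    """SNR in dB from the mean power of speech and noise waveforms."""
    p_speech = np.mean(np.square(speech))
    p_noise = np.mean(np.square(noise))
    return 10 * np.log10(p_speech / p_noise)

def scale_noise_to_snr(speech, noise, target_db):
    """Scale the noise so the speech-in-noise mixture has the requested SNR
    (e.g., -6 or -1 dB, as in the study). Amplitude gain is derived from the
    dB difference between the current and target SNR."""
    current = snr_db(speech, noise)
    gain = 10 ** ((current - target_db) / 20)  # 20, not 10: gain acts on amplitude
    return noise * gain
```

A more negative SNR (−6 dB vs. −1 dB) means proportionally more noise energy, i.e., a harder listening condition.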
Purpose: This study aimed to develop and test a measure of real-time continuous speech understanding to be used with natural dialogues. Method: The measure was based on a category monitoring paradigm and employed five existing recordings of natural dialogues from which the different test categories and associated target words were derived. For each dialogue, a listener was first given a semantic category and asked to press a button as quickly as possible whenever they heard an instance of the category. We tested 63 younger adults, using five semantic categories (family, media, season, temperature, and travel) at three noise levels (in quiet, 0 dB, and −5 dB signal-to-noise ratio [SNR]). Performance was measured in terms of accuracy and response time. Results: The results showed clear differences between the three noise conditions regardless of the semantic category. The peak of the response distribution was highest and earliest for the quiet condition and was reduced with decreasing SNR. The responses varied across categories, reflecting differences in the complexity of a given category or the typicality of the association between target words and their category. Broad categories and/or target words that were less directly associated with their category had decreased hit rates and increased response times. Conclusion: The results were discussed in terms of the sensitivity (hit rate) of the performance measure, as well as whether it picked up higher level semantic, context, and discourse properties of the dialogues. Supplemental Material: https://doi.org/10.23641/asha.21561681
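In the category monitoring paradigm described above, performance reduces to two measures: hit rate (how often a button press follows a target word) and response time (how quickly it follows the word's onset). As a minimal sketch of that scoring logic (the response window, timestamps, and function name are illustrative assumptions, not the authors' analysis code), each target can be matched to at most one press falling inside a plausible window after its onset:

```python
def score_trial(target_onsets, press_times, window=(0.2, 3.0)):
    """Score one dialogue: hit rate and mean response time in seconds.
    A press is a hit if it falls within `window` seconds after a target
    word's onset; each press can be credited to only one target."""
    hits, rts = 0, []
    presses = sorted(press_times)
    used = set()  # indices of presses already credited to a target
    for onset in target_onsets:
        for i, t in enumerate(presses):
            if i in used:
                continue
            rt = t - onset
            if window[0] <= rt <= window[1]:
                hits += 1
                rts.append(rt)
                used.add(i)
                break
    hit_rate = hits / len(target_onsets) if target_onsets else 0.0
    mean_rt = sum(rts) / len(rts) if rts else None
    return hit_rate, mean_rt
```

Under this scheme, broad categories or weakly associated target words would show up exactly as reported: fewer presses landing inside the window (lower hit rate) and presses landing later within it (longer response times).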
Past research suggests that older adults expend more cognitive resources when processing visual speech than younger adults. If so, given resource limitations, older adults may not get as large a visual speech benefit as younger ones on a resource-demanding speech processing task. We tested this using a speech comprehension task that required attention across two talkers and a simple response (i.e., the question-and-answer task) and measured response time and accuracy. Specifically, we compared the size of visual speech benefit for older and younger adults. We also examined whether the presence of a visual distractor would reduce the visual speech benefit more for older than younger adults. Twenty-five older adults (12 females, MAge = 72) and 25 younger adults (17 females, MAge = 22) completed the question-and-answer task under time pressure. The task included the following conditions: auditory and visual (AV) speech; AV speech plus visual distractor; and auditory speech with static face images. Both age groups showed a visual speech benefit regardless of whether a visual distractor was also presented. Likewise, the size of the visual speech benefit did not significantly interact with age group for accuracy or the potentially more sensitive response time measure.