Abstract: Successful communication is vital to active aging and well-being, yet virtually all older adults find it challenging to communicate effectively in noisy environments. The resulting discomfort and frustration can prompt withdrawal from or avoidance of social situations, which, in turn, can severely limit the range of activities available to older adults and lead to a less active and satisfying lifestyle and, in some cases, depression. Using the International Classification of Functioning, Disability and Health's (I…
“…In other words, older adults needed speech to be presented at an ∼2.2 dB higher SNR than young adults to reach their recognition threshold in noise. These results are in line with the abundant literature on speech perception in noise ( Heinrich et al, 2016 ). Semantics: Age-related changes in semantic emotion recognition follow findings on spoken word recognition.…”
Section: Discussion (supporting)
confidence: 92%
“…Communication in older age is essential to maintain quality of life, cognitive skills, and emotional wellbeing ( Heinrich et al, 2016 ; Livingston et al, 2017 ). Abundant evidence suggests that speech processing is impaired in aging, with severe implications ( Helfer et al, 2017 ).…”
Older adults process emotions in speech differently than young adults do. However, it is unclear whether these age-related changes affect all speech channels to the same extent, and whether they originate from a sensory or a cognitive source. The current study adopted a psychophysical approach to directly compare young and older adults’ sensory thresholds for emotion recognition in two channels of spoken emotion: prosody (tone) and semantics (words). A total of 29 young adults and 26 older adults listened to 50 spoken sentences presenting different combinations of emotions across prosody and semantics. They were asked to recognize the prosodic or the semantic emotion, in separate tasks. Sentences were presented against a background of speech-spectrum noise ranging from an SNR of −15 dB (difficult) to +5 dB (easy). Individual recognition thresholds were calculated (by fitting psychometric functions) separately for prosodic and semantic recognition. Results indicated that: (1) recognition thresholds were better for young than for older adults, suggesting an age-related general decrease across channels; (2) recognition thresholds were better for prosody than for semantics, suggesting a prosodic advantage; (3) importantly, the prosodic advantage in thresholds did not differ between age groups (thus a sensory source for age-related differences in spoken-emotion processing was not supported); and (4) larger failures of selective attention were found for older adults than for young adults, indicating that older adults had greater difficulty inhibiting irrelevant information. Taken together, the results do not support a sole sensory source, but rather an interplay of cognitive and sensory sources for age-related differences in spoken-emotion processing.
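The threshold-estimation step described above (fitting a psychometric function to per-listener accuracy across SNR levels) can be sketched as follows. This is a minimal illustration, not the study's actual analysis: the data, the chance level (0.25), and the lapse rate (0.02) are all hypothetical assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr, threshold, slope, chance=0.25, lapse=0.02):
    """Logistic psychometric function scaled between chance and 1 - lapse.

    threshold: SNR (dB) at the curve's midpoint; slope: steepness.
    chance and lapse are illustrative assumptions, not the study's values.
    """
    p = 1.0 / (1.0 + np.exp(-slope * (snr - threshold)))
    return chance + (1.0 - chance - lapse) * p

# Hypothetical per-listener data: proportion correct at each SNR level.
snrs = np.array([-15.0, -10.0, -5.0, 0.0, 5.0])
accuracy = np.array([0.28, 0.40, 0.70, 0.90, 0.95])

# Fit the curve and read off the listener's recognition threshold.
(threshold, slope), _ = curve_fit(
    psychometric, snrs, accuracy, p0=[-5.0, 0.5],
    bounds=([-15.0, 0.01], [5.0, 5.0]),
)
print(f"estimated threshold: {threshold:.1f} dB SNR")
```

A group difference like the ∼2.2 dB reported above would then show up as a shift in these fitted threshold values between young and older listeners.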
“…Third, the listening span task (LSPAN; Conway et al, 2005) and the Trail Making Test (Reitan, 1992) were administered to examine working memory capacity and executive function, respectively. These cognitive tasks were administered for two reasons: As attentional resources are important for both working memory and executive function, any age differences in performance on these tasks would support the claim that OAs have smaller attentional resource capacities than YAs (Craik & Byrd, 1982;Heinrich et al, 2016). The other reason is that by evaluating the relationship between these cognitive measures and the performance on the current paradigm, there is an opportunity to obtain data that can then be used to extend existing models used within cognitive hearing science (i.e., the Ease of Language Understanding [ELU] model and the Framework for Understanding Effortful Listening), since these do not consider processes that may be involved in establishing the correspondence of AV speech.…”
Purpose:
Listeners understand significantly more speech in noise when the talker's face can be seen (visual speech) than in an auditory-only baseline (a visual speech benefit). This study investigated whether the visual speech benefit is reduced when the correspondence between auditory and visual speech is uncertain, and whether any reduction is affected by listener age (older vs. younger) and by how severely the auditory signal is masked.
Method:
Older and younger adults completed a speech recognition in noise task that included an auditory-only condition and four auditory–visual (AV) conditions in which one, two, four, or six silent talking face videos were presented. One face always matched the auditory signal; the other face(s) did not. Auditory speech was presented in noise at −6 and −1 dB signal-to-noise ratio (SNR).
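Presenting auditory speech "in noise at −6 and −1 dB SNR," as in the method above, amounts to scaling the masker so the speech-to-noise power ratio hits the target. A minimal sketch (the sine "speech" and white-noise masker are stand-ins, not the study's stimuli):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale noise so the speech-to-noise power ratio equals snr_db, then mix.

    Assumes both signals are 1-D arrays at the same sample rate.
    """
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # SNR (dB) = 10 * log10(p_speech / p_noise)  ->  solve for noise power.
    target_noise_power = p_speech / (10.0 ** (snr_db / 10.0))
    scaled_noise = noise * np.sqrt(target_noise_power / p_noise)
    return speech + scaled_noise[: len(speech)]

rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # stand-in "speech"
noise = rng.standard_normal(16000)                            # stand-in masker
mixed = mix_at_snr(speech, noise, snr_db=-6.0)                # harder condition
```

The same function with `snr_db=-1.0` yields the easier condition; lower (more negative) SNR means a louder masker relative to the speech.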
Results:
When the SNR was −6 dB, the standard-sized visual speech benefit was reduced for both age groups as more talking faces were presented. When the SNR was −1 dB, younger adults received the standard-sized visual speech benefit even when two talking faces were presented, whereas older adults did not.
Conclusions:
The size of the visual speech benefit obtained by older adults was always smaller when AV correspondence was uncertain; this was not the case for younger adults. Difficulty establishing AV correspondence may be a factor that limits older adults' speech recognition in noisy AV environments.
Supplemental Material
https://doi.org/10.23641/asha.16879549
“…Social distancing incurred significant life changes that could be experienced as negative or positive, such as losing or changing jobs and un/healthy lifestyle changes (6). Restrictions also severely disrupted social interactions, social presence, communication and daily routines, all important to maintain cognitive performance and wellbeing [see (7, 8)]. Taken together, social restrictions have been found to impair mental health, including an increase in anxiety, depressive symptoms, loneliness and social isolation (9–11).…”
Objectives:
The aim of the current study was to identify difficulties in adapting to normal life once the COVID-19 lockdown had been lifted. Israel was used as a case study, as COVID-19 social restrictions, including a nationwide lockdown, were lifted almost completely by mid-April 2021, following a large-scale vaccination operation.
Methods:
A sample of 293 middle-aged and older Israeli adults (M age = 61.6 ± 12.8, range 40–85 years) reported on return-to-routine adaptation difficulties (on a novel index), depression, positive solitude, and several demographic factors.
Results:
Of the participants, 40.4% met the criteria for (at least) mild depressive symptoms. Higher levels of adaptation difficulties were related to higher rates of clinical depressive symptoms. This link was moderated by positive solitude: the association between return-to-routine adaptation difficulties and depression was found mainly for individuals with low positive solitude.
Conclusions:
The current findings are of special interest to public welfare, as adaptation difficulties were associated with a higher likelihood of clinical depressive symptoms, while positive solitude was found to be an efficient moderator during this period. The large proportion of depressive symptoms that persisted despite the lifting of social restrictions should be taken into consideration by policymakers when designing return-to-routine plans.
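The moderation result described above (the adaptation-difficulties/depression link weakening as positive solitude rises) is the kind of effect an interaction-term regression tests. A minimal sketch on simulated data; everything here, including the coefficients, is hypothetical and not the study's data or analysis:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 293  # sample size reported in the study

# Simulated (not real) standardized predictors.
adapt = rng.standard_normal(n)     # return-to-routine adaptation difficulties
solitude = rng.standard_normal(n)  # positive solitude
# Simulated outcome with a negative interaction: the adaptation-depression
# link weakens as positive solitude increases (hypothetical effect sizes).
depress = (0.5 * adapt - 0.2 * solitude - 0.3 * adapt * solitude
           + rng.standard_normal(n))

# OLS with an interaction term: depress ~ adapt + solitude + adapt:solitude
X = np.column_stack([np.ones(n), adapt, solitude, adapt * solitude])
beta, *_ = np.linalg.lstsq(X, depress, rcond=None)
b0, b_adapt, b_sol, b_inter = beta

# Simple slopes: effect of adaptation difficulties at low/high solitude.
for level in (-1.0, 1.0):
    print(f"slope of adapt at solitude = {level:+.0f} SD: "
          f"{b_adapt + b_inter * level:.2f}")
```

A negative interaction coefficient, with a steeper simple slope at low solitude than at high solitude, is the pattern the abstract reports ("mainly for individuals with low positive solitude").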