Perceptual learning in younger and older adults

Abstract

Purpose: This study examined whether older adults remain perceptually flexible when presented with ambiguities in speech in the absence of lexically disambiguating information. We expected older adults to show less perceptual learning when top-down information was not available. We also investigated whether individual differences in executive function predicted perceptual learning in older and younger adults. Method: Younger (n = 31) and older (n = 27) adults completed two perceptual learning tasks, each comprising a pretest, exposure, and posttest phase. Both learning tasks exposed participants to clear and ambiguous speech tokens, but crucially, the lexically guided learning task provided disambiguating lexical information, while the distributional learning task did not. Participants also performed several cognitive tasks to investigate individual differences in working memory, vocabulary, and attention-switching control. Results: We found that perceptual learning is maintained in older adults, but that learning may be stronger in contexts where top-down information is available. Receptive vocabulary scores predicted learning across both age groups and in both learning tasks. Conclusions: Implicit learning is maintained with age across different learning conditions, but remains stronger when lexically biasing information is available. We find that receptive vocabulary is relevant for learning in both types of learning tasks, suggesting the importance of vocabulary knowledge for adapting to ambiguities in speech.
Purpose: Listening effort is quickly becoming an important metric for assessing speech perception in less-than-ideal situations. However, the relationship between the construct of listening effort and the measures used to assess it remains unclear. We compared two measures of listening effort: a cognitive dual task and a physiological pupillometry task. We sought to investigate the relationship between these measures of effort and whether engaging effort impacts speech accuracy. Method: In Experiment 1, 30 participants completed a dual task and a pupillometry task that were carefully matched in stimuli and design. The dual task consisted of a spoken word recognition task and a visual match-to-sample task. In the pupillometry task, pupil size was monitored while participants completed a spoken word recognition task. Both tasks presented words at three levels of listening difficulty (unmodified, eight-channel vocoding, and four-channel vocoding) and provided response feedback on every trial. We refined the pupillometry task in Experiment 2 (n = 31); crucially, participants no longer received response feedback. Finally, we ran a new group of participants on both tasks in Experiment 3 (n = 30). Results: In Experiment 1, accuracy in the visual task decreased with increased signal degradation in the dual task, but pupil size was sensitive to accuracy and not vocoding condition. After removing feedback in Experiment 2, changes in pupil size were predicted by listening condition, suggesting the task was now sensitive to engaged effort. Both tasks were sensitive to listening difficulty in Experiment 3, but there was no relationship between the tasks, and neither task predicted speech accuracy. Conclusions: Consistent with previous work, we found little evidence for a relationship between different measures of listening effort. We also found no evidence that effort predicts speech accuracy, suggesting that engaging more effort does not lead to improved speech recognition.
Cognitive and physiological measures of listening effort are likely sensitive to different aspects of the construct of listening effort. Supplemental Material https://doi.org/10.23641/asha.16455900
Word recognition is a gateway to language whose cognitive mechanisms are well understood in typical listeners. However, cognitive science has not identified the fundamental dimensions along which processing varies across people or contexts. This is necessary for developing universal theories that fully capture the range of language function; it can inform intervention and assessment; and it is relevant for understanding the links between hearing loss and cognitive decline. This study sought to identify these dimensions in a heterogeneous population of cochlear implant users. We characterized millisecond-by-millisecond word recognition using the Visual World Paradigm. A principal component analysis revealed three dimensions of processing that mirror prior small-scale studies. Each dimension was predicted by different auditory and demographic factors, and each predicted outcomes over and above auditory fidelity. Thus, real-time language processing varies along a small number of dimensions, which explain variable real-world outcomes of older individuals and people with hearing loss.
Spoken word recognition is a complex cognitive process that underpins efficient language processing. When recognizing words, listeners must quickly map the spoken input to stored lexical candidates as speech unfolds over time. Older adults experience two declines that could impact word recognition: hearing loss and cognitive decline. To recognize words, listeners must be able to accurately hear the input, and they must resolve competition between lexical candidates, though it is unclear whether this resolution derives from domain-general or language-specific mechanisms at any age. Thus, spoken word recognition may provide a crucial mediator between hearing loss and general cognitive declines in older adults. We examined online spoken word recognition in a continuous age sample from adolescence to older adulthood (N = 104, ages 11–79). Hearing thresholds were collected from all participants at octave frequencies between 0.25 and 8 kHz. Participants completed a Visual World Paradigm (VWP) task to examine the dynamics of lexical competition and a non-linguistic analogue of the VWP to capture domain-general visual cognition and speed of processing. Spoken word recognition increased in efficiency through the 20s but began to decline around age 45. A hierarchical regression showed that age predicted the efficiency of activating target words over and above visual cognition and peripheral hearing. This suggests that age uniquely affects word recognition, separate from hearing ability and age-related changes to visual cognition. This highlights spoken word recognition as a potential early marker for more severe cognitive decline in the future.
Classical psycholinguistics seeks a universal set of language processing mechanisms for all people. This relies on the "modal" listener: hearing, neurotypical, monolingual, young adults. Applied psycholinguistics then characterizes differences in terms of their deviation from the modal listener. This approach mirrors naturalist philosophies of health, which presume a normal function and define ill health as a deviation from it. In contrast, functionalist positions argue that ill health is in part culturally derived and occurs when a person cannot meet socio-culturally defined goals. This separates differences in underlying biology (disease) from socio-cultural function (illness). Functionalism offers a needed alternative for psycholinguistics, given that few people fit the modal definition. In contrast to psychometric measures, which are culturally defined, a process-based approach can yield more insight. We illustrate this with work examining word recognition across multiple domains: cochlear implant users, children, language disorders, L2 learners, and aging. This work seeks to understand each group's solutions to the problem of word recognition as interesting in its own right. Variation in process is value-neutral, even as psychometric measures assess fit with cultural expectations (e.g., disease vs. illness). By examining variation in processing across people with a variety of skills and goals, we arrive at deeper insight into fundamental principles.