A large number of studies have demonstrated that semantic richness dimensions (e.g., number of features, semantic neighborhood density, semantic diversity, concreteness, emotional valence) influence word recognition processes. Some of these richness effects appear to be task-general, while others vary across tasks. Importantly, almost all of these findings come from the visual word recognition literature. To address this gap, we examined the extent to which these semantic richness effects also emerge in spoken word recognition, using a megastudy approach that allows an examination of the relative contributions of the various semantic properties to performance in two tasks: lexical decision and semantic categorization. The results show that concreteness, valence, and number of features accounted for unique variance in latencies across both tasks in a similar direction (faster responses for spoken words that were concrete, emotionally valenced, and high in number of features), while arousal, semantic neighborhood density, and semantic diversity did not influence latencies. Implications for spoken word recognition processes are discussed.
Recent theories propose that language-switching in bilinguals influences executive control. We investigated whether switching behaviour, shaped by the bilingual's interactional context as well as personal preferences, impacted attentional control. We compared four groups on two tasks of attentional control: (i) Edinburgh monolinguals, (ii) Edinburgh non-switching late bilinguals, (iii) Edinburgh non-switching early bilinguals, and (iv) Singapore switching early bilinguals. Effects of interactional context were observed, with Singapore bilinguals performing better on conflict resolution in the Attention Network Task and Edinburgh late bilinguals performing better on attentional switching in the Elevator reversal subtest of the Test of Everyday Attention. Our results suggest that the interactional context of bilinguals can affect attentional control in different ways.
The author investigated voice context effects in recognition memory for words spoken by multiple talkers by comparing performance when studied words were repeated with same, different, or new voices at test. Hits and false alarms increased when words were tested with studied voices compared with unstudied voices. Discrimination increased only when the exact same voice was used. A trend toward conservatism in response bias was observed when test words switched to increasingly unfamiliar voices. Taken together, the findings suggest that the voice-specific attributes of individual talkers are preserved in long-term memory. Implications for the role of instance-specific matching and voice-specific familiarity processes, and for the nature of spoken-word representation, are discussed.
Psycholinguists have developed a number of measures to tap different aspects of a word's semantic representation. The influence of these measures on lexical processing has collectively been described as semantic richness effects. However, the effects of these word properties on memory are currently not well understood. This study examines the relative contributions of lexical and semantic variables to free recall and recognition memory at the item level, using a megastudy approach. Hierarchical regression of recall and recognition performance on a number of lexical-semantic variables showed task-general effects, whereby the structural component, frequency, number of senses, and arousal accounted for unique variance in both free recall and recognition memory. Task-specific effects included number of features, imageability, and body-object interaction, which accounted for unique variance in recall, whereas age of acquisition, familiarity, and extremity of valence accounted for unique variance in recognition. Forward selection regression analyses generally converged on these findings. Hierarchical regression also revealed that lexical variables accounted for more variance in recognition than in recall, whereas semantic variables accounted for more unique variance above and beyond lexical variables in recall than in recognition. Implications of the findings are discussed.
With a new metric called phonological Levenshtein distance (PLD20), the present study explores the effects of phonological similarity and word frequency on spoken word recognition, using polysyllabic words that have neither phonological nor orthographic neighbors, as defined by neighborhood density (the N-metric). Inhibitory effects of PLD20 were observed for these lexical hermits: close-PLD20 words were recognized more slowly than distant-PLD20 words, indicating lexical competition. Importantly, these inhibitory effects were found only for low-frequency (not high-frequency) words, in line with previous findings that phonetically related primes inhibit recognition of low-frequency words. These results indicate that the properties of PLD20, a continuous measure of word-form similarity, make it a promising new metric for quantifying phonological distinctiveness in spoken word recognition research.
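PLD20 is computed as the mean Levenshtein (edit) distance from a word's phonological transcription to its 20 closest neighbours in the lexicon. A minimal Python sketch of that computation follows; the function names and the toy lexicon are illustrative, not taken from the study's materials.

```python
# Sketch of the PLD20 metric: mean edit distance from a target word's
# phonological transcription to its n closest neighbours in a lexicon.

def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, or substitutions
    needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def pld_n(word: str, lexicon: list[str], n: int = 20) -> float:
    """Mean edit distance to the n closest phonological neighbours
    (PLD20 when n=20 and the strings are phonological transcriptions)."""
    dists = sorted(levenshtein(word, other)
                   for other in lexicon if other != word)
    return sum(dists[:n]) / min(n, len(dists))
```

A word whose n closest transcriptions are all one edit away gets a PLD score of 1.0 (a dense, "close-PLD20" word); the lexical hermits studied here would instead have large mean distances even to their nearest neighbours.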
The view that successful memory performance depends crucially on the extent of the match between encoding and retrieval conditions is commonplace in memory research. However, Nairne (Memory, 10, 389-395, 2002) proposed that this idea of trace-cue compatibility as the driving force behind memory retention is a myth, because one cannot make unequivocal predictions about performance by appealing to the encoding-retrieval match. What matters instead is the relative diagnostic value of the match, not the absolute match. Three experiments were carried out in which participants memorised word pairs and tried to recall target words when given retrieval cues. The diagnostic value of the cue was varied by manipulating the extent to which the cues subsumed other memorised words and the level of the encoding-retrieval match. The results supported Nairne's (Memory, 10, 389-395, 2002) assertion that the diagnostic value of retrieval cues is a better predictor of memory performance than the absolute encoding-retrieval match.