Abstract: Four experiments used signal detection analyses to assess recognition memory for lists of words consisting of differing numbers of exemplars from different semantic categories. The results showed that recognition memory performance, measured by d_a, (a) increased as category length (CL, the number of study-list items selected from the same semantic category) increased from 1 to 8 but then decreased as CL further increased from 8 to 14, and (b) was greater when 2 studied items from the same category occurred back …
“…In category length (CL) designs, the increase in the false alarm rate (FAR) with increasing CL is robust across a wide range of studies (e.g., Cho & Neely, 2013; Criss & Shiffrin, 2004; Dennis & Chapman, 2010; Neely & Tse, 2009; Robinson & Roediger, 1997; Shiffrin et al., 1995). The effects on the hit rate (HR) are somewhat less consistent.…”
Section: Similarity Effects In Global Matching Models
We would like to thank Amy Criss and Asli Kiliç for generously sharing their data and Brendan Johns for providing the datasets of similarity and relatedness judgments. This work was supported by an ARC Discovery Early Career Research Award (DE170100106) awarded to Adam Osth. BEAGLE vectors, datasets, and model code can be found at https://osf.io/gtdqf/
Abstract: Recognition memory models posit that false alarm rates increase as the global similarity between the probe cue and the contents of memory is increased. Global similarity predictions have commonly been tested using category length designs, where it has been found that false alarm rates increase as the number of studied items from a common category is increased. In this work, we explored global similarity predictions within unstructured lists of words using representations from the BEAGLE model (Jones & Mewhort, 2007). BEAGLE differs from traditional semantic space models in that it contains two types of representations: item vectors, which encode unordered co-occurrence, and order vectors, in which words are similar to the extent that they share neighboring words in the same relative positions. Global similarity among item and order vectors was regressed onto drift rates in the diffusion decision model (DDM; Ratcliff, 1978), which unifies both response times and accuracy. We implemented this model in a hierarchical Bayesian framework across seven datasets with lists composed of unrelated words. Results indicated clear deficits due to global similarity among item vectors, but only a minimal impact of global similarity among the order vectors.
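The core quantity in the abstract above — global similarity between a probe and the study list — can be sketched as a summed vector similarity. The snippet below is a minimal illustration, not the paper's actual pipeline: it uses random vectors as stand-ins for BEAGLE item or order vectors, and the function name `global_similarity` is an assumed label. In the paper, this kind of score would serve as a predictor regressed onto DDM drift rates.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def global_similarity(probe, study_vectors):
    """Summed similarity of a probe to every studied item's vector.

    Computed separately for item and order vectors in the paper's
    approach; here it is just the raw summed-cosine score.
    """
    return sum(cosine(probe, s) for s in study_vectors)

# Toy stand-ins for BEAGLE vectors (hypothetical, random):
rng = np.random.default_rng(0)
study = [rng.standard_normal(64) for _ in range(10)]
probe_old = study[3]                 # a studied item
probe_new = rng.standard_normal(64)  # an unstudied item

# An old probe's score includes its self-match (cosine of 1 with
# itself), so it tends to exceed a new probe's score on average.
print(global_similarity(probe_old, study))
print(global_similarity(probe_new, study))
```

On this view, raising category length adds studied vectors that are similar to a related lure, inflating the lure's summed score and hence the false alarm rate.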
“…Although controlled studies have found no effect of category length using taxonomic categories (Cho & Neely, 2013; Neely & Tse, 2009), Maguire et al. (2010) found a large effect of category length on 2AFC performance for associative categories while finding no effect of category length when taxonomic categories are used. Similarly, the studies that have found DRM effects, which are possibly the biggest false memory effects found in list memory paradigms, often use associatively related categories (Robinson & Roediger, 1997; Roediger & McDermott, 1995).…”
Section: Arguments For Item Noise Models
A powerful theoretical framework for exploring recognition memory is the global matching framework, in which a cue's memory strength reflects the simultaneous match of the retrieval cues against the entire contents of memory. Contributions at retrieval can be categorized as matches and mismatches to the item and context cues: the self match (match on item and context), item noise (match on context, mismatch on item), context noise (match on item, mismatch on context), and background noise (mismatch on item and context). We present a model that directly parameterizes the matches and mismatches to the item and context cues, which enables estimation of the magnitude of each interference contribution (item noise, context noise, and background noise). The model was fit within a hierarchical Bayesian framework to 10 recognition memory datasets that use manipulations of strength, list length, list strength, word frequency, study-test delay, and stimulus class in item and associative recognition. Estimates of the model parameters revealed at most a small contribution of item noise that varies by stimulus class, with virtually no item noise for single words and scenes. Despite the unpopularity of background noise in recognition memory models, background noise estimates dominated at retrieval across nearly all stimulus classes, with the exception of high frequency words, which exhibited equivalent levels of context noise and background noise. These parameter estimates suggest that the majority of interference in recognition memory stems from experiences acquired before the learning episode.
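The four-way decomposition described above can be made concrete with a toy numeric sketch. All values below are invented for illustration; in the actual model these quantities are free parameters estimated hierarchically from data, and the dictionary keys are assumed labels, not the model's parameter names.

```python
# Toy decomposition of a probe's summed match strength into the
# four sources named above. Values are made up for illustration.
match_components = {
    "self_match":       2.0,   # item AND context match (old probes only)
    "item_noise":       0.05,  # other list items: context matches, item doesn't
    "context_noise":    0.30,  # prior occurrences of the probe item itself
    "background_noise": 0.60,  # pre-experimental traces: neither cue matches
}

def strength(components, is_old):
    """Summed match strength; new probes lack the self match."""
    noise = sum(v for k, v in components.items() if k != "self_match")
    return noise + (components["self_match"] if is_old else 0.0)

old_strength = strength(match_components, is_old=True)
new_strength = strength(match_components, is_old=False)
print(old_strength, new_strength)
```

The illustrative values echo the abstract's conclusion: background noise is the largest interference term, so most of a new probe's strength comes from pre-experimental traces rather than from the other list items.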
“…In contrast, item-noise models predict interference from other items, especially from related items. For instance, category-length manipulations affect recognition (Criss & Shiffrin, 2004; cf. Dennis & Chapman, 2010; Neely & Tse, 2009; Shiffrin, Huber, & Marinelli, 1995) and varying the proportion of high-frequency versus low-frequency words on a study list affects recognition accuracy (Dorfman & Glanzer, 1988; Malmberg & Murnane, 2002).…”