Recall decreases across a series of subspan immediate-recall trials but rebounds if the semantic category of the words is changed, an example of release from proactive interference (RPI). The size of the rebound depends on the semantic categories used and ranges from 0% to 95%. We used a corpus of novels to create vectors representing the meaning of about 40,000 words using the BEAGLE algorithm. The distance between categories and spread within categories jointly predicted the size of the RPI. We used a holographic model for recall equipped with a lexicon of BEAGLE vectors representing the meaning of words. The model captured RPI using a hologram as an interface to bridge information from episodic and semantic memory; it is the first account of RPI to capture release at the level of individual words in categorized lists.
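To make the two category metrics concrete, here is a minimal sketch of how between-category distance and within-category spread could be computed from word vectors using cosine similarity. The metric choice, the function names, and the toy random vectors are illustrative assumptions, not the authors' BEAGLE pipeline:

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between two vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def category_centroid(vectors):
    # average vector of a category's members
    return np.mean(vectors, axis=0)

def between_category_distance(cat_a, cat_b):
    # 1 - cosine similarity between the two category centroids
    return 1.0 - cosine(category_centroid(cat_a), category_centroid(cat_b))

def within_category_spread(vectors):
    # mean distance of each member from its own category centroid
    c = category_centroid(vectors)
    return float(np.mean([1.0 - cosine(v, c) for v in vectors]))

# toy stand-ins for BEAGLE word vectors: two well-separated categories
rng = np.random.default_rng(0)
fruits = rng.normal(size=(4, 64)) + 2.0
tools = rng.normal(size=(4, 64)) - 2.0
d = between_category_distance(fruits, tools)
s = within_category_spread(fruits)
```

On this construction, `d` is large (the centroids point in roughly opposite directions) while `s` is small (members cluster around their centroid), mirroring the idea that the two quantities can vary independently and jointly predict release.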
We would like to thank Amy Criss and Asli Kiliç for generously sharing their data and Brendan Johns for providing the datasets of similarity and relatedness judgments. This work was supported by an ARC Discovery Early Career Research Award (DE170100106) awarded to Adam Osth. BEAGLE vectors, datasets, and model code can be found at https://osf.io/gtdqf/

GLOBAL SEMANTIC SIMILARITY EFFECTS

Abstract

Recognition memory models posit that false alarm rates increase as the global similarity between the probe cue and the contents of memory is increased. Global similarity predictions have been commonly tested using category length designs, where it has been found that false alarm rates increase as the number of studied items from a common category is increased. In this work, we explored global similarity predictions within unstructured lists of words using representations from the BEAGLE model (Jones & Mewhort, 2007). BEAGLE differs from traditional semantic space models in that it contains two types of representations: item vectors, which encode unordered co-occurrence, and order vectors, in which words are similar to the extent to which they share neighboring words in the same relative positions. Global similarity among item and order vectors was regressed onto drift rates in the diffusion decision model (DDM: Ratcliff, 1978), which unifies both response times and accuracy. We implemented this model in a hierarchical Bayesian framework across seven datasets with lists composed of unrelated words. Results indicated clear deficits due to global similarity among item vectors, but only a minimal impact of global similarity among the order vectors.
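BEAGLE builds its order representations by binding word environment vectors together with circular convolution (Jones & Mewhort, 2007). A minimal sketch of that binding operation and its approximate inverse follows; the dimensionality, the random "environment" vectors, and the variable names are illustrative assumptions:

```python
import numpy as np

def circular_convolution(a, b):
    # binding via circular convolution, computed with the FFT
    # (convolution theorem: conv(a, b) = ifft(fft(a) * fft(b)))
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def involution(a):
    # approximate inverse for decoding: reverse all elements but the first
    return np.concatenate(([a[0]], a[1:][::-1]))

dim = 1024
rng = np.random.default_rng(3)
# random environment vectors standing in for "her" and "cat"
her = rng.normal(0.0, 1.0 / np.sqrt(dim), dim)
cat = rng.normal(0.0, 1.0 / np.sqrt(dim), dim)

bound = circular_convolution(her, cat)            # encodes the pair
decoded = circular_convolution(involution(her), bound)  # noisy approximation of cat
```

Decoding recovers a vector that is much more similar to `cat` than to any unrelated random vector, which is what lets order vectors carry information about which words occupy neighboring positions.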
Recognition memory models posit that performance is impaired as the similarity between the probe cue and the contents of memory is increased (global similarity). Global similarity predictions have been commonly tested using category length designs, in which the number of items from a common taxonomic or associative category is manipulated. Prior work has demonstrated that increases in the length of associative categories show clear detriments on performance, but that result is found only inconsistently for taxonomic categories. In this work, we explored global similarity predictions using representations from the BEAGLE model (Jones & Mewhort, 2007). BEAGLE’s two types of word representations, item and order vectors, exhibit similarity relations that resemble relations among associative and taxonomic category members, respectively. Global similarity among item and order vectors was regressed onto drift rates in the diffusion decision model (DDM: Ratcliff, 1978), which simultaneously accounts for both response times and accuracy. We implemented this model in a hierarchical Bayesian framework across seven datasets with lists composed of unrelated words. Results indicated clear deficits due to global similarity among item vectors, suggesting that lists of unrelated words exhibit semantic structure that impairs performance. However, there were relatively small influences of global similarity among the order vectors. These results are consistent with prior work suggesting that associative similarity causes stronger performance impairments than taxonomic similarity.
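The regression structure described above can be sketched simply: global similarity is the summed similarity between the probe and every studied item, entered as a linear predictor of the DDM drift rate. The cosine metric, the linear form, and the coefficient values below are illustrative assumptions, not the fitted hierarchical Bayesian model:

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between two vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def global_similarity(probe, study_vectors):
    # summed similarity between the probe cue and all studied items
    return sum(cosine(probe, v) for v in study_vectors)

def drift_rate(probe, study_vectors, beta0=1.0, beta1=-0.5):
    # illustrative linear regression of drift rate on global similarity;
    # beta0 and beta1 are made-up values, not estimated parameters
    return beta0 + beta1 * global_similarity(probe, study_vectors)
```

With a negative slope, a lure that happens to resemble many studied items receives a lower drift rate toward the correct "new" response, producing the false-alarm deficits the abstract reports.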
Without having seen a bigram like “her buffalo”, you can easily tell that it is congruent because “buffalo” can be aligned with more common nouns like “cat” or “dog” that have been seen in contexts like “her cat” or “her dog”—the novel bigram structurally aligns with representations in memory. We present a new class of associative nets we call Dynamic-Eigen-Nets, and provide simulations that show how they generalize to patterns that are structurally aligned with the training domain. Linear-Associative-Nets respond with the same pattern regardless of input, motivating the introduction of saturation to facilitate other response states. However, models using saturation cannot readily generalize to novel, but structurally aligned patterns. Dynamic-Eigen-Nets address this problem by dynamically biasing the eigenspectrum towards external input using temporary weight changes. We demonstrate how a two-slot Dynamic-Eigen-Net trained on a text corpus provides an account of bigram judgment-of-grammaticality and lexical decision tasks, showing it can better capture syntactic regularities from the corpus compared to the Brain-State-in-a-Box and the Linear-Associative-Net. We end with a simulation showing how a Dynamic-Eigen-Net is sensitive to syntactic violations introduced in bigrams, even after the associations that encode those bigrams are deleted from memory. Across all simulations, the Dynamic-Eigen-Net reliably outperforms the Brain-State-in-a-Box and the Linear-Associative-Net. We propose Dynamic-Eigen-Nets as associative nets that generalize at retrieval, instead of encoding, through recurrent feedback.
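The two baseline models named above can be sketched in a few lines: a Linear-Associative-Net stores patterns as summed Hebbian outer products, and the Brain-State-in-a-Box adds saturation by clipping the recurrent state at each step. The toy bipolar patterns below stand in for the corpus-trained representations; everything here is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def hebbian_weights(patterns):
    # Linear-Associative-Net memory: averaged sum of outer products
    dim = patterns.shape[1]
    W = np.zeros((dim, dim))
    for p in patterns:
        W += np.outer(p, p)
    return W / patterns.shape[0]

def bsb_retrieve(W, x, steps=20, clip=1.0):
    # Brain-State-in-a-Box retrieval: recurrent feedback with saturation,
    # driving the state toward a corner of the hypercube
    for _ in range(steps):
        x = np.clip(x + W @ x, -clip, clip)
    return x

# store three random bipolar patterns, then clean up a corrupted cue
rng = np.random.default_rng(2)
patterns = np.sign(rng.normal(size=(3, 16)))
W = hebbian_weights(patterns)
noisy = patterns[0] * np.where(rng.random(16) < 0.2, -1, 1)  # flip ~20% of signs
recalled = bsb_retrieve(W, noisy.astype(float))
```

Without the `np.clip` saturation, iterating `x + W @ x` would blow up along the dominant eigenvector regardless of the input — the behavior the abstract attributes to the plain Linear-Associative-Net, and the problem Dynamic-Eigen-Nets address by temporarily biasing the eigenspectrum toward the external input.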