We present three artificial-grammar experiments. The first used position constraints, and the second used sequential constraints. The third varied both the amount of training and the degree of sequential constraint. Increasing both the amount of training and the redundancy of the grammar benefited participants' ability to infer grammatical status; nevertheless, they were unable to describe the grammar. We applied a multitrace model of memory to the task. The model used a global measure of similarity to assess the grammatical status of the probe and captured performance both in our experiments and in three classic studies from the literature. The model shows that retrieval is sensitive to structure in memory, even when individual exemplars are encoded sparsely. The work ties an understanding of performance in the artificial-grammar task to the principles used to understand performance in episodic-memory tasks.
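The abstract does not reproduce the model's equations, but MINERVA 2's standard retrieval rule makes the global-similarity idea concrete. Below is a minimal Python sketch with made-up sparse exemplar vectors and sizes: each stored trace is activated by the cube of its similarity to the probe, and the summed activation (the echo intensity) indexes grammatical status.

```python
import numpy as np

def similarity(probe, trace):
    """MINERVA 2 similarity: dot product normalized by the number of
    features that are nonzero in either the probe or the trace."""
    relevant = np.count_nonzero((probe != 0) | (trace != 0))
    return float(probe @ trace) / relevant if relevant else 0.0

def echo_intensity(probe, memory):
    """Global familiarity: cube each trace's similarity to the probe
    (cubing preserves sign) and sum over all stored traces."""
    return sum(similarity(probe, t) ** 3 for t in memory)

# Toy demo: sparsely encoded exemplars as random +1/0/-1 vectors.
rng = np.random.default_rng(2024)
memory = [rng.choice([-1, 0, 1], size=30) for _ in range(60)]
studied = memory[0]                      # probe resembling a stored exemplar
novel = rng.choice([-1, 0, 1], size=30)  # unrelated probe
# intensity above a criterion -> endorse the probe as grammatical
print(echo_intensity(studied, memory) > echo_intensity(novel, memory))
```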
Distributional semantic models (DSMs) specify learning mechanisms with which humans construct a deep representation of word meaning from statistical regularities in language. Despite their remarkable success at fitting human semantic data, virtually all DSMs may be classified as prototype models in that they construct a single representation of a word's meaning aggregated across contexts. This prototype representation conflates a word's multiple meanings and senses into a central tendency, often losing the subordinate senses of a word in favor of more frequent ones. We present an alternative instance-based DSM built on the classic MINERVA 2 multiple-trace model of episodic memory. The model stores a representation of each language instance in a corpus, and a word's meaning is constructed on the fly when the model is presented with a retrieval cue. Across two experiments with homonyms, in both an artificial and a natural language corpus, we show that the instance-based model naturally accounts for the subordinate meanings of words in appropriate context, owing to nonlinear activation over stored instances, whereas classic prototype DSMs cannot. The instance-based account suggests that meaning may not be created during learning or stored per se, but may instead be an artifact of retrieval from an episodic memory store.
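To make the nonlinear-activation claim concrete, here is a toy Python sketch in the spirit of the model, with random placeholder vectors rather than the paper's corpus or parameters. Cubing similarities lets a few well-matching instances outweigh many poorly matching ones, so a rare subordinate sense can dominate the echo when the retrieval cue supplies the right context.

```python
import numpy as np

def echo_content(cue, memory, tau=3):
    """Construct meaning at retrieval: weight every stored language
    instance by its cosine similarity to the cue raised to an odd
    power, then sum the weighted traces into a single echo vector."""
    sims = memory @ cue / (np.linalg.norm(memory, axis=1) * np.linalg.norm(cue))
    activations = np.sign(sims) * np.abs(sims) ** tau
    return activations @ memory

# Hypothetical homonym: 'bank' appears in money contexts 9 times for
# every river context, yet the cue's context can select the rare sense.
rng = np.random.default_rng(7)
w = {word: rng.normal(size=256) for word in ("bank", "money", "loan", "river", "water")}
memory = np.array([w["bank"] + w["money"] + w["loan"]] * 9
                  + [w["bank"] + w["river"] + w["water"]])
echo = echo_content(w["bank"] + w["river"] + w["water"], memory)
print(echo @ w["water"] > echo @ w["loan"])  # the subordinate sense wins
```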
The collection of very large text sources has revolutionized the study of natural language, leading to the development of several models of language learning and distributional semantics that extract sophisticated semantic representations of words from the statistical redundancies contained within natural language (e.g., Griffiths, Steyvers, & Tenenbaum, 2007; Jones & Mewhort, 2007; Landauer & Dumais, 1997; Mikolov, Sutskever, Chen, Corrado, & Dean, 2013). These models treat knowledge as an interaction of processing mechanisms and the structure of language experience, but the language experience itself (the training corpus) is often treated agnostically, as if any large sample of text were interchangeable with any other. We report a distributional semantic analysis showing that written language in fiction books varies appreciably between books from different genres, between books from the same genre, and even between books written by the same author. Given that current theories assume word knowledge reflects an interaction between processing mechanisms and the language environment, the analysis shows the need for the field to engage in a more deliberate consideration and curation of the corpora used in computational studies of natural language processing.
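The analysis itself is not specified in the abstract; as a hedged illustration of the kind of comparison it describes, the sketch below builds a simple co-occurrence profile for a word in each of two corpora (hypothetical token lists book_a and book_b) and compares the profiles. A low cosine for the same word across books would signal corpus-dependent distributional structure.

```python
import numpy as np

def cooc_profile(tokens, target, vocab, window=3):
    """Count how often each vocabulary word appears within +/-window
    positions of the target: a simple distributional profile."""
    index = {word: i for i, word in enumerate(vocab)}
    counts = np.zeros(len(vocab))
    for i, tok in enumerate(tokens):
        if tok != target:
            continue
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i and tokens[j] in index:
                counts[index[tokens[j]]] += 1
    return counts

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

# Hypothetical usage with tokenized text from two novels:
# vocab = sorted(set(book_a) | set(book_b))
# print(cosine(cooc_profile(book_a, "night", vocab),
#              cooc_profile(book_b, "night", vocab)))
```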
We present a serial reaction time (SRT) task in which participants identified the location of a target by pressing a key mapped to that location. The location of successive targets was determined by the rules of a grammar, and we varied the redundancy of the grammar. Increasing both practice and the redundancy of the grammar reduced response time, but the participants were unable to describe the grammar. Such results are usually discussed as examples of implicit learning. Instead, we treat performance in terms of retrieval from a multitrace memory. In our account, after each trial, participants store a trace comprising the current stimulus, the response associated with it, and the context provided by the immediately preceding response. When a target is presented, it is used as a prompt to retrieve the response mapped to it. As participants practise the task, the redundancy of the series helps point to the correct response and thereby speeds its retrieval. The model captured performance in the experiment and in classic SRT studies from the literature. Its success shows that the SRT task can be understood in terms of retrieval from memory without appealing to implicit learning.
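A minimal sketch of this retrieval account follows, with hypothetical field names (stim, ctx, resp) and arbitrary vector codes. The probe joins the current stimulus with the preceding response; each trace's response field contributes in proportion to its cubed similarity to the probe, so a redundant series sharpens the echo around the correct response.

```python
import numpy as np

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def retrieve_response(stimulus, prev_response, memory, tau=3):
    """Probe memory with the current stimulus plus the context of the
    preceding response; blend stored responses by cubed similarity."""
    probe = np.concatenate([stimulus, prev_response])
    echo = np.zeros_like(memory[0]["resp"], dtype=float)
    for trace in memory:
        sim = cosine(probe, np.concatenate([trace["stim"], trace["ctx"]]))
        echo += (sim ** tau) * trace["resp"]
    return echo  # a sharper echo stands in for faster responding

# After each trial the model stores the full episode:
#   memory.append({"stim": s, "ctx": prev_r, "resp": r})
rng = np.random.default_rng(5)
locs = [rng.normal(size=32) for _ in range(4)]   # target locations
keys = [rng.normal(size=32) for _ in range(4)]   # response codes
memory = [{"stim": locs[1], "ctx": keys[0], "resp": keys[1]},
          {"stim": locs[2], "ctx": keys[1], "resp": keys[2]},
          {"stim": locs[3], "ctx": keys[2], "resp": keys[3]}]
echo = retrieve_response(locs[2], keys[1], memory)
print(max(range(4), key=lambda k: echo @ keys[k]))  # retrieves response 2
```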
People behave as if they know the structure of their environment. Because people rarely study that structure explicitly, several theorists have postulated an implicit learning system that abstracts that structure automatically. An alternative view is that people respond to local structure that derives from global structure. Measures are developed that quantify structure in a set of stimuli, in individual stimuli, and in encoded stimuli. The authors apply the measures to examine serial recall for sequences of colors generated using a stationary Markov grammar. They demonstrate that the three kinds of redundancy are confounded and show that the memorial advantage for grammatical stimuli reflects participants' use of local expressions of grammatical structure to aid learning.
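The measures themselves are not given in the abstract; one standard, hedged stand-in for quantifying sequential constraint is information-theoretic redundancy under a first-order Markov model, sketched below.

```python
from collections import Counter
from math import log2

def markov_redundancy(sequence, alphabet_size):
    """Redundancy of a series under a first-order Markov model:
    1 - H(next | current) / log2(alphabet size). 0 for a random
    series, 1 when every transition is fully determined."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    first_counts = Counter(sequence[:-1])
    n_pairs = len(sequence) - 1
    h = 0.0
    for (a, _), n in pair_counts.items():
        h -= (n / n_pairs) * log2(n / first_counts[a])
    return 1 - h / log2(alphabet_size)

print(markov_redundancy("ABABABAB", 2))       # 1.0: fully constrained
print(markov_redundancy("AABBABBA" * 20, 2))  # < 1: weaker constraint
```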