Miller (1956) summarized evidence that people can remember about seven chunks in short-term memory (STM) tasks. However, that number was meant more as a rough estimate and a rhetorical device than as a real capacity limit. Others have since suggested that there is a more precise capacity limit, but that it is only three to five chunks. The present target article brings together a wide variety of data on capacity limits suggesting that the smaller capacity limit is real. Capacity limits will be useful in analyses of information processing only if the boundary conditions for observing them can be carefully described. Four basic conditions in which chunks can be identified and capacity limits can accordingly be observed are: (1) when information overload limits chunks to individual stimulus items, (2) when other steps are taken specifically to block the recoding of stimulus items into larger chunks, (3) in performance discontinuities caused by the capacity limit, and (4) in various indirect effects of the capacity limit. Under these conditions, rehearsal and long-term memory cannot be used to combine stimulus items into chunks of an unknown size; nor can storage mechanisms that are not capacity-limited, such as sensory memory, allow the capacity-limited storage mechanism to be refilled during recall. A single, central capacity limit averaging about four chunks is implicated along with other, noncapacity-limited sources. The pure STM capacity limit expressed in chunks is distinguished from compound STM limits obtained when the number of separately held chunks is unclear. Reasons why pure capacity estimates fall within a narrow range are discussed and a capacity limit for the focus of attention is proposed.
The purpose of this review is to formulate a revised model of information processing that takes into account recent research on memory storage, selective attention, effortful versus automatic processing, and the mutual constraints that these areas place on one another. One distinctive aspect of the proposed model is the inclusion of two phases of sensory storage in each modality. The first phase extends sensation for several hundred milliseconds, whereas the second phase is a vivid recollection of sensation. The mechanism of at least the longer phase is the activation of features in long-term memory, comparable to the mechanism of non-sensory, short-term storage. Another distinctive aspect of the model is that habituation/dishabituation and central executive processes together are assumed to determine the focus of attention, without the need for either an early or a late attentional filter. Research issues that contribute to a comparison of models are discussed.

Broadbent (1958) proposed a general model of the human information-processing system that was primarily designed to account for how we attend to some stimuli while ignoring others (i.e., our selective-attention capabilities) and how we retain stimulus information, in various forms, both before and after attending to it (i.e., our memory storage capabilities). Although a version of Broadbent's model still appears in almost every textbook of cognitive psychology, researchers today are ambivalent toward it; the model appears to be inconsistent with many research findings. Schneider (1987) noted that "in the 1970s there was a clear movement away" from this sort of model to "a variety of representations (e.g., levels of processing, schemata, semantic networks, and production systems)" (p. 73).
Broadbent's (1958) model of processing can be termed a "pipeline" model, in which information is conveyed in a fixed serial order from one storage structure to the next: from sensory storage to short-term storage and then to long-term storage. Voluntary control of the system was represented by a selective-attention device or "filter" located after the sensory store and by information feedback loops from the high-level processing system to earlier processing stages. Recently, Broadbent (1984) summarized a number of reasons why this sort of model may be obsolete. They include (a) its characterization of the subject as a passive recipient of information, (b) massive "top-down" influences in perception in which higher-level information …

This research was supported by National Institutes of Health Grant 2-R23-HD21338-02 awarded to the author. I thank David Balota, Neal Kroll, Michael Posner, Scott Saults, two anonymous reviewers, and the participants in my information-processing seminar for commenting on earlier drafts of the manuscript. However, they may not share all of the views expressed.
Working memory (WM) is the set of mental processes holding limited information in a temporarily accessible state in service of cognition. We provide a theoretical framework to understand the relation between WM and aptitude measures. The WM measures that have yielded high correlations with aptitudes include separate storage and processing task components, on the assumption that WM involves both storage and processing. We argue that the critical aspect of successful WM measures is that rehearsal and grouping processes are prevented, allowing a clearer estimate of how many separate chunks of information the focus of attention circumscribes at once. Storage-and-processing tasks correlate with aptitudes, according to this view, largely because the processing task prevents rehearsal and grouping of items to be recalled. In a developmental study, we document that several scope-of-attention measures that do not include a separate processing component, but nevertheless prevent efficient rehearsal or grouping, also correlate well with aptitudes and with storage-and-processing measures. So does digit span in children too young to rehearse.

Keywords: working memory; short-term memory; individual differences; variation in working memory; scholastic abilities; intellectual abilities; attention; capacity; storage capacity

Baddeley and Hitch (1974) highlighted a key theoretical construct, working memory (WM), which can be described generally as the set of mechanisms capable of retaining a small amount of information in an active state for use in ongoing cognitive tasks (though it now means …). Research on WM suggests that the measures used most often to examine individual differences have both strengths and weaknesses. A main type of strength is their strong correlation with intellectual aptitude tests, and a main type of weakness is the difficulty encountered in analyzing and interpreting WM test results.
This difficulty stems largely from the reliance on dual tasks in the measurement of WM capacity (which include separate storage and processing task components). We will argue that the research literature provides hints that the strengths can be retained without using storage-and-processing measures. We will offer a theoretical framework for doing so, and for measuring WM in a more meaningful way than is found with current measurement practices. The theoretical framework is based on the notion of an adjustable attentional focus and on measures of the storage capacity of attention or its scope. The predictions tested in the present article pertain to the scope of attention, whereas the adjustable nature of the focus allows consistency with other highly relevant research (e.g., Kane, Bleckley, Conway, & Engle, 2001). We do not judge the success of this endeavor by whether storage-and-processing measures or the proposed alternative, scope-of-attention measures, pick up more variance in aptitude tasks. Rather, success will be judged by whether the variance that is picked up contributes to our understanding of the processes underlyi...
In the recent literature there has been considerable confusion about the three types of memory: long-term, short-term, and working memory. This chapter strives to reduce that confusion and makes up-to-date assessments of these types of memory. Long- and short-term memory could differ in two fundamental ways, with only short-term memory demonstrating (1) temporal decay and (2) chunk capacity limits. Both properties of short-term memory are still controversial but the current literature is rather encouraging regarding the existence of both decay and capacity limits. Working memory has been conceived and defined in three different, slightly discrepant ways: as short-term memory applied to cognitive tasks, as a multi-component system that holds and manipulates information in short-term memory, and as the use of attention to manage short-term memory. Regardless of the definition, there are some measures of memory in the short term that seem routine and do not correlate well with cognitive aptitudes and other measures (those usually identified with the term "working memory") that seem more attention demanding and do correlate well with these aptitudes. The evidence is evaluated and placed within a theoretical framework depicted in Fig. 1.
Wood and Cowan (1995) replicated and extended Moray's (1959) investigation of the cocktail party phenomenon, which refers to a situation in which one can attend to only part of a noisy environment, yet highly pertinent stimuli such as one's own name can suddenly capture attention. Both of these previous investigations have shown that approximately 33% of subjects report hearing their own name in an unattended, irrelevant message. Here we show that subjects who detect their name in the irrelevant message have relatively low working-memory capacities, suggesting that they have difficulty blocking out, or inhibiting, distracting information.
Visual working memory is often modeled as having a fixed number of slots. We test this model by assessing the receiver operating characteristics (ROC) of participants in a visual-working-memory change-detection task. ROC plots yielded straight lines with a slope of 1.0, a tell-tale characteristic of all-or-none mnemonic representations. Formal model assessment yielded evidence highly consistent with a discrete fixed-capacity model of working memory for this task.

Keywords: working memory | capacity | mathematical models of memory | short-term memory

The study of the nature and capacity of visual working memory (WM) is both timely (1) and controversial (2, 3). A popular conceptualization is that visual WM consists of a fixed number of discrete slots in which items or chunks are temporarily held (2, 4, 5). Nonetheless, there are dissenting viewpoints in which the discreteness is taken as, at most, a convenient oversimplification (6, 7). In this article, we provide a rigorous test of the fixed-capacity model for a visual WM task. Herein, we apply this test to items that differ in color, although the test is suitable to examine the generality of capacity limits across various materials. We used a common version (8-15) of the task popularized by Luck and Vogel (4, 16) (see Fig. 1A). At study, participants are presented with an array of colored squares. At test, a single square is presented; this square is either the same color as the corresponding square in the study array (a "same trial") or a novel color (a "change trial"). Participants simply decide whether the test square is the same as or different from the corresponding studied square. In this task, where the color of each square is unique and the colors are well separated, capacity is the number of squares (objects) that may be held in visual WM.
This object-based view of capacity is supported by previous research (4), in which performance does not vary with the number of manipulated features per object. Previous demonstrations of fixed capacity have relied on plotting capacity estimates as a function of the number of to-be-remembered items. Fixed capacity is claimed because capacity estimates tend to asymptote at three to four items for array sizes of four to six items. This approach, however, is not the most rigorous test of the model. There are three weaknesses in previous demonstrations: (i) the asymptote of the capacity estimates may be mimicked by models without recourse to fixed capacity; (ii) previous demonstrations are made with aggregate data, and an asymptote in the group aggregate does not necessarily imply asymptotes in all or any individuals; and (iii) the stability of these asymptotes has not been formally assessed. These weaknesses motivate a more constrained test, to be presented subsequently.

The Fixed-Capacity Almost-Ideal Observer Model. We define the fixed-capacity ideal observer as one who maximizes the probability of a correct response given the constraint that visual WM is discrete and limited in the number of items that may be held. Here, we derive th...
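As a numerical sketch (my illustration, not the authors' code or derivation): under a discrete fixed-capacity, all-or-none model of the single-probe change-detection task described above, a probed item is in memory with probability d = min(k/N, 1); otherwise the observer guesses "change" with probability g. That gives hits H = d + (1 − d)g and false alarms F = (1 − d)g, so H − F = d for every g, which is the straight slope-1.0 ROC described above, and capacity can be estimated from any point via the standard single-probe formula k = N(H − F) (Cowan's k).

```python
def roc_point(k, set_size, guess_rate):
    """Predicted (hit, false-alarm) rates under a discrete fixed-capacity,
    all-or-none model: the probed item is in memory with probability
    d = min(k/N, 1); otherwise the observer guesses "change"."""
    d = min(k / set_size, 1.0)
    hit = d + (1 - d) * guess_rate       # change trial: memory OR a lucky guess
    false_alarm = (1 - d) * guess_rate   # same trial: only guessing yields "change"
    return hit, false_alarm

def capacity_k(hit, false_alarm, set_size):
    """Cowan's k for single-probe change detection: k = N * (H - F)."""
    return set_size * (hit - false_alarm)

# Sweeping the guessing rate traces the model's ROC: H - F stays constant,
# i.e., a straight line with slope 1.0 in probability coordinates.
points = [roc_point(3, 6, g / 10) for g in range(11)]

# And k is recovered from any point on that line, e.g. g = 0.4 gives
# H = 0.7, F = 0.2, so capacity_k(0.7, 0.2, 6) is about 3.0.
h, f = roc_point(3, 6, 0.4)
```

Note that the fixed-capacity ideal-observer test in the paper is a formal model comparison, not this back-of-the-envelope calculation; the sketch only illustrates why an all-or-none representation predicts a linear, unit-slope ROC.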