It is commonly believed that visual short-term memory (VSTM) consists of a fixed number of "slots" in which items can be stored. An alternative theory in which memory resource is a continuous quantity distributed over all items seems to be refuted by the appearance of guessing in human responses. Here, we introduce a model in which resource is not only continuous but also variable across items and trials, causing random fluctuations in encoding precision. We tested this model against previous models using two VSTM paradigms and two feature dimensions. Our model accurately accounts for all aspects of the data, including apparent guessing, and outperforms slot models in formal model comparison. At the neural level, variability in precision might correspond to variability in neural population gain and doubly stochastic stimulus representation. Our results suggest that VSTM resource is continuous and variable rather than discrete and fixed, and might explain why the subjective experience of VSTM is not all or none.

Thomas Chamberlin famously warned scientists against entertaining only a single hypothesis, for such a modus operandi might lead to undue attachment and "a pressing of the facts to make them fit the theory" (ref. 1, p. 840). For half a century, the study of short-term memory limitations has been dominated by a single hypothesis, namely that a fixed number of items can be held in memory and any excess items are discarded (2-5). The alternative notion that short-term memory resource is a continuous quantity distributed over all items, with a lower amount per item translating into lower encoding precision, has enjoyed some success (6-8), but has been unable to account for the finding that humans often seem to make a random guess when asked to report the identity of one of a set of remembered items, especially when many items are present (9). Specifically, if resource were evenly distributed across items (6, 10), observers would never guess.
Thus, at present, no viable continuous-resource model exists.

Here, we propose a more sophisticated continuous-resource model, the variable-precision (VP) model, in which the amount of resource an item receives, and thus its encoding precision, varies randomly across items and trials and on average decreases with set size. Resource might correspond to the gain of a neural population pattern of activity encoding a memorized feature. When gain is higher, a stimulus is encoded with higher precision (11, 12). Variability in gain across items and trials is consistent with observations of single-neuron firing rate variability (13-15) and attentional fluctuations (16, 17).

We tested the VP model against three alternative models (Fig. 1). According to the classic item-limit (IL) model (4), a fixed number of items is kept in memory, and memorized items are recalled perfectly. In the equal-precision (EP) model (6, 10), a continuous resource is evenly distributed across all items. The slots-plus-averaging (SA) model (9) acknowledges the presence of noise but combines it with the notion of ...
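The core generative idea of the VP model, encoding precision that varies randomly across items and trials with a mean that decreases with set size, can be illustrated with a small simulation. The gamma distribution over precision, the power-law decay of mean precision, the mapping of precision to a von Mises concentration, and all parameter values below are illustrative assumptions, not specifics taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_vp_trial(set_size, j_bar1=30.0, alpha=1.2, scale=5.0):
    """Sketch of one trial under a variable-precision (VP) scheme.

    Assumed form: each item's precision J is gamma-distributed, and the
    mean precision decays with set size as a power law (hypothetical
    parameters j_bar1, alpha, scale).
    """
    # Mean precision per item decreases with set size (assumed power law).
    j_bar = j_bar1 * set_size ** (-alpha)
    # Precision varies randomly across items (assumed gamma distribution).
    J = rng.gamma(shape=j_bar / scale, scale=scale, size=set_size)
    # Circular stimuli; memory noise is von Mises with concentration ~ J
    # (a simplifying approximation for this sketch).
    true = rng.uniform(-np.pi, np.pi, size=set_size)
    noise = rng.vonmises(0.0, np.maximum(J, 1e-6), size=set_size)
    # Wrap estimates back into (-pi, pi].
    est = np.mod(true + noise + np.pi, 2 * np.pi) - np.pi
    return true, est, J
```

Higher set sizes yield lower mean precision, and the gamma variability means that on some trials an item's precision is near zero, which in behavior would look like a random guess, the signature the abstract says slot models attribute to discarded items.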
We measured the precision with which an irrelevant feature of a relevant object is stored in visual short-term memory. In each experiment, 600 online subjects each completed 30 trials in which the same feature (orientation or color) was relevant, followed by a single surprise trial in which the other feature was relevant. Pooling data across all subjects, we find that in a delayed-estimation task, but not in a change localization task, the irrelevant feature is retrieved, albeit with much lower precision than when the same feature is relevant: the irrelevant/relevant precision ratio was 3.8% for orientation and 20.4% for color.
A central question in the study of visual short-term memory (VSTM) has been whether its basic units are objects or features. Most studies addressing this question have used change detection tasks in which the feature value before the change is highly discriminable from the feature value after the change. This approach assumes that memory noise is negligible, which recent work has shown not to be the case. Here, we investigate VSTM for orientation and color within a noisy-memory framework, using change localization with a variable magnitude of change. A specific consequence of the noise is that it is necessary to model the inference (decision) stage. We find that (a) orientation and color have independent pools of memory resource (consistent with classic results); (b) an irrelevant feature dimension is either encoded but ignored during decision-making, or encoded with low precision and taken into account during decision-making; and (c) total resource available in a given feature dimension is lower in the presence of task-relevant stimuli that are neutral in that feature dimension. We propose a framework in which feature resource comes both in packaged and in targeted form.
We used a delayed-estimation paradigm to characterize the joint effects of set size (one, two, four, or six) and delay duration (1, 2, 3, or 6 s) on visual working memory for orientation. We conducted two experiments: one with delay durations blocked, another with delay durations interleaved. As dependent variables, we examined four model-free metrics of dispersion as well as precision estimates in four simple models. We tested for effects of delay time using analyses of variance, linear regressions, and nested model comparisons. We found significant effects of set size and delay duration on both model-free and model-based measures of dispersion. However, the effect of delay duration was much weaker than that of set size, dependent on the analysis method, and apparent in only a minority of subjects. The highest forgetting slope found in either experiment at any set size was a modest 1.14°/s. As secondary results, we found a low rate of nontarget reports, and significant estimation biases towards oblique orientations (but no dependence of their magnitude on either set size or delay duration). Relative stability of working memory even at higher set sizes is consistent with earlier results for motion direction and spatial frequency. We compare with a recent study that performed a very similar experiment.
When designing microprocessors, engineers must verify that the proposed design, defined in a hardware description language, does what is intended. During this verification process, engineers run simulation tests and fix bugs when tests fail. Because of the complexity of the design, the baseline approach is to provide random stimuli that exercise random parts of the design. However, this method is time-consuming and redundant, especially as the design matures and the failure rate drops. To increase efficiency and detect failures faster, machine learning models can be trained on previously run tests to assess the likelihood of failure of new test candidates before running them. This way, instead of running random tests agnostically, engineers apply the model's predictions to a new set of test candidates and run only the subset (i.e., "filtering" the tests) that is more likely to fail. Because of the severe class imbalance (1% failure rate), I trained an ensemble of supervised (classification) and unsupervised (outlier detection) models and used the union of the predictions from both models to catch more failures. The tool was deployed on an internal high-performance computing (HPC) cluster early this year as a complementary workflow that does not interfere with the existing workflow. After deployment, I found instability in post-deployment performance and ran various experiments to address the issue, such as identifying the effect of randomness in the test generation process. In addition to introducing this relatively new data-driven approach to hardware design verification, this study also discusses the details of post-deployment evaluation, such as retraining and working around real-world constraints, which are often not discussed in machine learning and data science research.
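The union-ensemble idea described above, where a supervised classifier and an unsupervised outlier detector each flag candidate tests and any test flagged by either is run, can be sketched as follows. The model choices (random forest, isolation forest), the decision threshold, and the synthetic data are all assumptions for illustration; the abstract does not name specific algorithms.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(1)

# Synthetic stand-in for test features and pass/fail labels (~1% failures),
# mirroring the severe class imbalance mentioned in the abstract.
n, d = 2000, 8
X = rng.normal(size=(n, d))
y = (rng.random(n) < 0.01).astype(int)  # 1 = failed test

# Supervised branch: classifier trained on labeled past tests.
clf = RandomForestClassifier(
    n_estimators=100, class_weight="balanced", random_state=0
).fit(X, y)

# Unsupervised branch: outlier detector fit on passing tests only,
# so unusual candidates are flagged even without failure labels.
det = IsolationForest(contamination=0.01, random_state=0).fit(X[y == 0])

# Score new test candidates and take the union of both flags.
X_new = rng.normal(size=(500, d))
p_fail = clf.predict_proba(X_new)[:, 1]       # supervised failure score
is_outlier = det.predict(X_new) == -1         # unsupervised outlier flag
run_mask = (p_fail > 0.05) | is_outlier       # run any test either flags
```

Taking the union rather than the intersection trades extra simulation time for higher recall of rare failures, which matches the stated goal of catching more failures under severe imbalance.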