Recall of visual features from working memory varies in both bias and precision depending on stimulus parameters. Whereas a number of models can approximate the average distribution of recall error across target stimuli, attempts to model how error varies with the choice of target have been ad hoc. Here we adapt a neural model of working memory to provide a principled account of these stimulus-specific effects, by allowing each neuron's tuning function to vary according to the principle of efficient coding, which states that neural responses should be optimized with respect to the frequency of stimuli in nature. For orientation, this means incorporating a prior that favors cardinal over oblique orientations. While continuing to capture the changes in error distribution with set size, the resulting model accurately described stimulus-specific variations as well, better than a slot-based competitor. Efficient coding produces a repulsive bias away from cardinal orientations, a bias that ought to be sensitive to changes in the environmental statistics. We subsequently tested whether shifts in the stimulus distribution influenced response bias to uniformly sampled target orientations in human subjects (of either sex). Across adaptation blocks, we manipulated the distribution of nontarget items by sampling from a bimodal congruent (incongruent) distribution with peaks centered on cardinal (oblique) orientations. Preadaptation responses were repulsed away from the cardinal axes. However, exposure to the incongruent distribution produced systematic decreases in repulsion that persisted after adaptation. This result confirms the role of prior expectation in generating stimulus-specific effects and validates the neural framework.

SIGNIFICANCE STATEMENT Theories of neural coding have been used successfully to explain how errors in recall from working memory depend on the number of items stored. However, recall of visual features also shows stimulus-specific variation in bias and precision. Here we unify two previously unconnected theories, the neural resource model of working memory and the efficient coding framework, to provide a principled account of these stimulus-specific effects. Given the importance of working memory limitations to multiple aspects of human and animal behavior, and the recent high-profile advances in theories of efficient coding, our modeling framework provides a richer, yet parsimonious, description of how orientation encoding influences visual working memory performance.
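To illustrate the mechanism this abstract describes, below is a minimal Python sketch of an efficient population code for orientation. It is not the paper's implementation: the prior shape (p(θ) ∝ 2 − |sin 2θ|, a form taken from Girshick et al., 2011), the von Mises tuning curves, and all parameter values are assumptions chosen for illustration. Tuning curves are spaced uniformly in the space defined by the prior's cumulative distribution, so coding resources concentrate at the cardinals; spiking is Poisson and decoding is maximum likelihood. The printed mean bias is repulsed away from the cardinal axes, qualitatively matching the preadaptation data.

```python
import numpy as np

# Illustrative assumptions, not the paper's fitted parameters.
N_NEURONS = 64      # neurons in the encoding population
KAPPA = 8.0         # von Mises tuning concentration
GAIN = 50.0         # expected total spike count across the population
N_TRIALS = 1000     # Monte Carlo trials per test orientation

def prior(theta):
    """Cardinal-peaked environmental prior over orientation (period pi)."""
    return 2.0 - np.abs(np.sin(2.0 * theta))

# Efficient-coding warp: the prior's normalized CDF maps orientation into a
# space in which the tuning curves are uniform, allocating more neurons
# (narrower tuning in stimulus space) where stimuli are common.
theta_grid = np.linspace(0.0, np.pi, 1024, endpoint=False)
F = np.cumsum(prior(theta_grid))
F = F / F[-1] * np.pi

def warp(theta):
    return np.interp(theta, theta_grid, F)

prefs = np.linspace(0.0, np.pi, N_NEURONS, endpoint=False)  # warped-space prefs

def rates(theta):
    """Expected spike counts for a stimulus at orientation theta (radians)."""
    r = np.exp(KAPPA * (np.cos(2.0 * (warp(theta) - prefs)) - 1.0))
    return GAIN * r / r.sum()

# Precompute rates over the grid for fast maximum-likelihood decoding.
R = np.array([rates(t) for t in theta_grid])   # shape (grid, neurons)
logR = np.log(R + 1e-12)

rng = np.random.default_rng(0)
for theta in np.deg2rad([10.0, 30.0, 45.0, 60.0, 80.0]):
    spikes = rng.poisson(rates(theta), size=(N_TRIALS, N_NEURONS))
    est = theta_grid[np.argmax(spikes @ logR.T - R.sum(axis=1), axis=1)]
    err = (est - theta + np.pi / 2) % np.pi - np.pi / 2   # error in (-90, 90]
    print(f"target {np.rad2deg(theta):4.0f} deg   "
          f"mean bias {np.rad2deg(err.mean()):+6.2f} deg")
```

Targets near the cardinals (e.g., 10 and 80 degrees) show a positive or negative mean bias pointing away from 0 and 90 degrees, while the oblique (45 degrees) is unbiased, which is the repulsive pattern the abstract attributes to efficient coding.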
BACKGROUND: Anaerobic colonic flora are necessary for the fermentation of fiber into short-chain fatty acids and constitute the bulk of fecal mass. Lack of dietary fiber in most enteral feedings, compounded by antibiotic therapy, suppresses normal colonic metabolism, resulting in diarrhea. Pectin, a water-soluble fiber, stimulates epithelial growth in the colon and thus reduces diarrhea. METHODS: Forty-four critically ill patients receiving enteral nutrition and antibiotic therapy were randomized to receive fiber-containing or fiber-free tube feedings and pectin or placebo. Data on frequency, consistency, and volume of fecal output; energy (caloric) intake; and administration of specific medications were collected for 9 days. Diarrhea was defined as 2 or more days with scores of 12 or higher on the Hart and Dobb diarrhea scale. RESULTS: Subjects in the 4 groups did not differ significantly in age, sex, severity of illness, or energy intake. Twelve subjects (27.3%) experienced diarrhea. Significantly fewer subjects in the fiber-free/placebo and fiber/pectin groups experienced diarrhea than did subjects in the fiber/placebo group (P = .02). On the basis of repeated-measures analysis of variance of daily mean scores, the severity of diarrhea did not differ significantly among the study groups over time (P = .16). CONCLUSIONS: The reduced rate of diarrhea found in this study may be related to the stringent definition of diarrhea used. The therapeutic dose of pectin for reducing diarrhea needs further exploration. The trend was toward less diarrhea in the fiber/pectin group, but the study needs to be replicated with a larger sample.
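For concreteness, the study's diarrhea criterion (two or more days with Hart and Dobb scale scores of 12 or higher) can be written as a one-line classification rule. The sketch below is illustrative only; the example score series are hypothetical, not study data.

```python
def has_diarrhea(daily_scores, threshold=12, min_days=2):
    """Apply the study's definition: diarrhea is present when the Hart and
    Dobb scale score reaches `threshold` on at least `min_days` days."""
    return sum(score >= threshold for score in daily_scores) >= min_days

# Hypothetical 9-day score series for two subjects
print(has_diarrhea([5, 8, 13, 6, 12, 9, 7, 4, 6]))  # True: two days >= 12
print(has_diarrhea([5, 8, 13, 6, 9, 9, 7, 4, 6]))   # False: only one day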
Research into human working memory limits has been shaped by the competition between different formal models, with a central point of contention being whether internal representations are continuous or discrete. Here we describe a sampling approach derived from principles of neural coding as a framework to understand working memory limits. Reconceptualizing existing models in these terms reveals strong commonalities between seemingly opposing accounts, but also allows us to identify specific points of difference. We show that the discrete versus continuous nature of sampling is not critical to model fits, but that, instead, random variability in sample counts is the key to reproducing human performance in both single- and whole-report tasks. A probabilistic limit on the number of items successfully retrieved is an emergent property of stochastic sampling, requiring no explicit mechanism to enforce it. These findings resolve discrepancies between previous accounts and establish a unified computational framework for working memory that is compatible with neural principles.
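A small simulation makes the emergent item limit concrete. The sketch below is a schematic of stochastic sampling, not the paper's fitted model: it assumes a Poisson-distributed number of samples per item (all parameter values are illustrative), with the response given by the mean of the item's samples, or a uniform random guess when the item happens to receive zero samples. As set size grows, guessing probability rises without any explicit item limit being imposed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters, not fitted values.
MEAN_TOTAL_SAMPLES = 8.0   # expected samples shared across the display
SAMPLE_SD = 20.0           # s.d. (deg) of a single noisy sample

def simulate(set_size, n_trials=100_000):
    """Stochastic sampling of one probed item: a Poisson number of samples
    (mean shared equally across items); averaging k samples reduces noise
    by sqrt(k); zero samples forces a uniform random guess."""
    k = rng.poisson(MEAN_TOTAL_SAMPLES / set_size, size=n_trials)
    noise = rng.normal(0.0, SAMPLE_SD, n_trials) / np.sqrt(np.maximum(k, 1))
    guess = rng.uniform(-90.0, 90.0, n_trials)
    return np.where(k == 0, guess, noise), (k == 0).mean()

for n in (1, 2, 4, 8):
    err, p_guess = simulate(n)
    print(f"set size {n}: response sd {err.std():5.1f} deg, "
          f"P(guess) = {p_guess:.3f}")
```

The probability of retrieving zero samples is exp(-mean count), so a soft "item limit" emerges purely from random variability in sample counts, as the abstract argues.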
Highlights
- We fit a neural model of working memory storage to performance on retro-cue tasks.
- This model provided a better description of the data than a prominent mixture model.
- Retro-cueing was associated with a higher firing rate of the encoding population.
- Results are consistent with protection of the cued item against temporal decay/drift.
This experiment examined single-process and dual-process accounts of the development of visual recognition memory. The participants, 6-7-year-olds, 9-10-year-olds, and adults, were presented with a list of pictures, which they encoded under shallow or deep conditions. They then made recognition and confidence judgments about a list containing old and new items. We replicated the main trends reported by Ghetti and Angelini in that recognition hit rates increased from 6 to 9 years of age, with larger age changes following deep than shallow encoding. Formal versions of the dual-process high-threshold signal detection model and several single-process models (equal-variance signal detection, unequal-variance signal detection, mixture signal detection) were fit to the developmental data. The unequal-variance and mixture signal detection models gave a better account of the data than either of the other models. A state-trace analysis found evidence for only one underlying memory process across the age range tested. These results suggest that single-process memory models based on memory strength are a viable alternative to dual-process models for explaining memory development.
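To make the model comparison concrete, here is a minimal sketch of the unequal-variance signal detection (UVSD) account, one of the two models favored above. The parameter values are hypothetical, not fits to the developmental data; the sketch simply demonstrates the model's signature prediction that the z-transformed ROC is approximately linear with slope 1/σ_old.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Hypothetical UVSD parameters: old-item strengths ~ N(d', sigma_old),
# new-item strengths ~ N(0, 1), with sigma_old > 1.
D_PRIME, SIGMA_OLD, N = 1.5, 1.3, 200_000
old = rng.normal(D_PRIME, SIGMA_OLD, N)
new = rng.normal(0.0, 1.0, N)

# Sweep confidence criteria (as in a ratings ROC) and fit the z-ROC line.
criteria = np.linspace(-1.0, 2.5, 6)
hits = np.array([(old > c).mean() for c in criteria])
fas = np.array([(new > c).mean() for c in criteria])
slope = np.polyfit(norm.ppf(fas), norm.ppf(hits), 1)[0]
print(f"simulated z-ROC slope {slope:.2f}; "
      f"UVSD predicts 1/sigma_old = {1.0 / SIGMA_OLD:.2f}")
```

A z-ROC slope below 1 is the classic unequal-variance signature in recognition data; the equal-variance model is the special case σ_old = 1.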
Research into human working memory limits has been shaped by the competition between different formal models, with a central point of contention being whether internal representations are continuous or discrete. Here we describe a sampling approach derived from principles of neural coding as a new framework to understand working memory limits. Reconceptualizing existing models in these terms reveals strong commonalities between seemingly opposing accounts, and shows that random variability in sample counts, rather than discreteness, is the key to reproducing human behavioral performance. A probabilistic limit on the number of items successfully retrieved is an emergent property of stochastic sampling, requiring no explicit mechanism to enforce it. These findings resolve discrepancies between previous accounts and establish a unified computational framework for working memory.

Elementary features of objects are represented within the human visual system in the form of population codes [1]. A simple model [2] of limits on representing multiple stimuli [3-5] assumes each stimulus is encoded in a separate pool of neurons with identical tuning curves, each centered on a different (preferred) feature value, such that the cells densely and uniformly cover a one-dimensional feature space (Fig. 1A). Each neuron's response to a stimulus consists of discrete spikes generated by a Poisson process at the rate determined by its tuning function. To make a connection with sampling [6-12], we associate each spike from a given pool with a sample of the feature value encoded by that pool.
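The spikes-as-samples idea can be sketched directly. The code below uses illustrative parameter values, not the paper's: it simulates a single pool of Poisson neurons with identical von Mises tuning curves tiling a circular feature space, and decodes by treating each spike as a sample located at the firing neuron's preferred value, so the estimate is a spike-count-weighted circular mean and a trial with zero spikes yields a random guess.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative parameters: one pool of Poisson neurons with identical
# von Mises tuning curves covering the circular feature space.
N_NEURONS, KAPPA, GAIN = 100, 4.0, 30.0
prefs = np.linspace(-np.pi, np.pi, N_NEURONS, endpoint=False)

def spike_counts(theta):
    """Poisson spike counts for a stimulus at feature value theta."""
    rate = np.exp(KAPPA * (np.cos(theta - prefs) - 1.0))
    rate *= GAIN / rate.sum()        # fixed expected population spike count
    return rng.poisson(rate)

def decode(counts):
    """Each spike is a sample at the firing neuron's preferred value.
    With identical tuning and fixed population gain, maximum likelihood
    reduces (to a good approximation) to the spike-count-weighted
    circular mean of preferred values."""
    if counts.sum() == 0:
        return rng.uniform(-np.pi, np.pi)   # no spikes: random guess
    return np.angle(np.sum(counts * np.exp(1j * prefs)))

errors = []
for _ in range(20_000):
    theta = rng.uniform(-np.pi, np.pi)
    err = (decode(spike_counts(theta)) - theta + np.pi) % (2 * np.pi) - np.pi
    errors.append(err)
print(f"recall error sd: {np.degrees(np.std(errors)):.1f} deg")
```

Splitting the fixed population gain across multiple stimulus pools lowers each pool's spike (sample) count, which is how set-size costs arise in this family of models.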
Recent experimental evidence in experience-based decision-making suggests that people are more risk seeking in the gains domain relative to the losses domain. This critical result is at odds with the standard reflection effect observed in description-based choice and explained by Prospect Theory. The so-called reversed-reflection effect has been attributed to the extreme-outcome rule, which suggests that memory biases affect risky choice from experience. To test the general plausibility of the rule, we conducted two experiments examining how the magnitude of prospective outcomes impacts risk preferences. We found that while the reversed-reflection effect was present with small-magnitude payoffs, using payoffs of larger magnitude brought participants' behavior back in line with the standard reflection effect. Our results suggest that risk preferences in experience-based decision-making are affected not only by the relative extremeness but also by the absolute extremeness of past events.