A number of recent studies have highlighted the exceptional capacity and fidelity of visual long-term memory. For instance, Brady, Konkle, Alvarez, and Oliva (Proceedings of the National Academy of Sciences, 105, 14325-14329, 2008) presented participants with thousands of images for nearly 6 h and then tested their memory in a two-alternative forced choice (2AFC) task. Participants were 87% accurate, even when the foil was extremely similar to the target (e.g., when the same object was presented in a different state). In the present investigation, we extended these findings by including a one-week delay condition and by testing memory in a yes/no as well as a 2AFC task. We replicated the exceptional memory results at a short delay. However, following a week delay, recognition accuracy was greatly reduced in both tasks, with comparable reductions in performance whether the foils were similar or dissimilar. These findings suggest that detailed and gist-like visual memories decay at similar rates, which highlights important limitations of visual long-term memory.

Keywords: Visual long-term memory · Exceptional memory · Recognition memory

Visual long-term memory is exceptional under some conditions. For example, Shepard (1967) found that participants were 98% accurate in a two-alternative forced choice (2AFC) task after studying a series of 600 pictures. These findings were extended in a landmark study by Standing (1973): he found that after viewing up to 10,000 images over the course of several hours, people were able to choose the images they had seen in a 2AFC task with an accuracy of over 80%. Standing concluded that the capacity of visual memory for recognizing pictorial content is almost limitless. In addition, some of these early studies measured the longevity of visual long-term memory by including various study-test delay conditions, and memory performance was still excellent at the longer delays. For example, Shepard found that recognition performance was at ceiling (99.7%) following a 2-h study-test delay, and still remarkably accurate (87.0%) following a seven-day delay. Similarly, Nickerson (1968) explored visual recognition memory in a 2AFC task with delay conditions ranging from one day to one year, and concluded that visual memory retention is substantial, since the probability of correctly recognizing a seen image was approximately 90% after a week delay and still higher than 70% after a month.

These findings show that visual long-term memory has the capacity to store and retrieve a vast number of images even after long delays. However, virtually all of these studies employed a 2AFC task in which the target was paired with an unrelated foil image. It is therefore difficult to determine whether the retained memory representations contained gist-like information about the basic-level object category of the studied image (e.g., "I saw an elephant rather than a chair"), or whether the memories included high-fidelity information about the perceptual details of the studied image (e.g., "I s...
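As background on how yes/no and 2AFC recognition performance can be put on a common footing, standard signal detection theory (e.g., Green & Swets, 1966) links the two formats through the sensitivity index d'. This is the textbook relation, not necessarily the specific analysis used in the study above:

d' = z(H) - z(F) (yes/no task, with hit rate H and false-alarm rate F)

P_{\text{2AFC}} = \Phi\left( d' / \sqrt{2} \right) (expected 2AFC proportion correct at the same underlying d')

On this relation, for instance, the 87% 2AFC accuracy cited above corresponds to d' = \sqrt{2} \cdot \Phi^{-1}(.87) \approx 1.6.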
Abstract: In two adaptation experiments we investigated the role of phonemes in speech perception. Participants repeatedly categorized an ambiguous test word that started with a blended /f/-/s/ fricative (?ail can be perceived as /fail/ or /sail/) or a blended /d/-/b/ stop (?ump can be perceived as /bump/ or /dump/) after exposure to a set of adaptor words. The adaptors all included unambiguous /f/ or /s/ fricatives or, alternatively, /d/ or /b/ stops. In Experiment 1 we manipulated the position of the adaptor phonemes so that they occurred at the start of the word (e.g., farm), at the start of the second syllable (e.g., tofu), or at the end of the word (e.g., leaf). We found that adaptation effects occurred across positions: participants were less likely to categorize the ambiguous test stimulus as if it contained the adapted phoneme. For example, after exposure to the adaptors leaf, golf, etc., participants were more likely to categorize the ambiguous test word ?ail as "sail". In Experiment 2 we also varied the voice of the speaker: words containing the unambiguous final adaptor phonemes were spoken by a female speaker, while the ambiguous test words, with their initial blended phonemes, were spoken by a male speaker. Again, robust adaptation effects occurred. Critically, in both experiments, similar adaptation effects were obtained for the fricatives and the stops, despite the fact that the acoustics of stops vary more as a function of position. We take these findings to support the claim that position-independent phonemes play a role in spoken word identification.

Traditional linguistic theory postulates a small set of phonemes that can be sequenced in various ways to represent thousands of words in a language (Chomsky & Halle, 1968; Trubetzkoy, 1969). Phonemes are the smallest linguistic units that can distinguish word meanings, and they are usually the size of a single consonant or vowel; for example, the consonants /b/ and /p/ are phonemes in English because they differentiate the words "bark" and "park". Phonemes are critically distinguished from speech sounds (i.e., phones) in their level of abstractness. Phones are acoustically defined units that are often context-dependent; that is, in a given language a certain phone may be bound to a specific syllable position, require a certain stress pattern, or occur within the context of specific surrounding sounds. Although phonemes are widely assumed in linguistic theory, the psychological evidence in support of phonemes, at least in the domain of speech perception, is scant. This has given rise to various models that abandon phonemes as a functional unit in speech perception; in such models, time-bound input segments (e.g., the successive sounds of dog and god) are used to activate dog and god representations, respectively. These time-bound segments can be seen as analogous to Pierrehumbert's position-specific phones (in that the segments do not abstract across position), although the input units in these models are often labeled phonemes.

The common rejection of position-invariant phonemes in psychological theories and models of word perception is a fundamental claim, and we explore this issue here. First we review the current empir...
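To make the dependent measure concrete, here is a minimal sketch (in Python, with invented toy data; the trial structure and effect definition are our own illustration, not the authors' analysis code) of how an adaptation effect of the kind described above could be quantified: the difference in the proportion of "sail" responses to the ambiguous test word after /f/-adaptors versus after /s/-adaptors.

# Hypothetical illustration (not the authors' code): quantify the adaptation
# effect as the difference in P("sail") after /f/- vs. /s/-adaptors.
trials = [
    # (adaptor_phoneme, response_to_ambiguous_test_word "?ail")
    ("f", "sail"), ("f", "sail"), ("f", "fail"), ("f", "sail"),
    ("s", "fail"), ("s", "fail"), ("s", "sail"), ("s", "fail"),
]

def p_sail(trials, adaptor):
    # Proportion of "sail" categorizations following the given adaptor phoneme.
    responses = [resp for adapt, resp in trials if adapt == adaptor]
    return sum(r == "sail" for r in responses) / len(responses)

# Adaptation predicts fewer percepts of the adapted phoneme, so P("sail")
# should be higher after /f/-adaptors than after /s/-adaptors.
effect = p_sail(trials, "f") - p_sail(trials, "s")
print(f"P(sail | /f/ adaptors) = {p_sail(trials, 'f'):.2f}")  # 0.75 on toy data
print(f"P(sail | /s/ adaptors) = {p_sail(trials, 's'):.2f}")  # 0.25 on toy data
print(f"adaptation effect = {effect:.2f}")                    # 0.50

A positive effect on this definition corresponds to the reported pattern: listeners categorize the ambiguous onset away from the phoneme they were adapted to.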
Rich episodic experiences are represented in a hierarchical manner across a diverse network of brain regions, and as such, the way in which episodes are forgotten is likely to be similarly diverse. Using novel experimental approaches and statistical modelling, recent research has suggested that item-based representations, such as those related to the colour and shape of an object, fragment over time, whereas higher-order event-based representations may be forgotten in a more uniform, 'holistic' manner. We propose a framework that reconciles these findings, in which complex episodes are represented hierarchically across different brain regions and forgetting is underpinned by a different mechanism at each level of the hierarchy.
The phenomenon of change blindness reveals that people are surprisingly poor at detecting unexpected visual changes; however, research on individual differences in detection ability is scarce. Predictive processing accounts of visual perception suggest that better change detection may be linked to assigning greater weight to prediction-error signals, as indexed by an increased alternation rate in perceptual rivalry or greater sensitivity to low-level visual signals. Alternatively, superior detection ability may be associated with robust visual predictions against which sensory changes can be more effectively registered, suggesting an association with high-level mechanisms of visual short-term memory (VSTM) and attention. We administered a battery of 10 measures to explore these predictions and to determine, for the first time, the test-retest reliability of commonly used change detection measures. Change detection performance was stable over time and generalized from displays of static scenes to video clips. An exploratory factor analysis revealed two factors explaining performance across the battery, which we identify as visual stability (loading on change detection, attention measures, VSTM, and perceptual rivalry) and visual ability (loading on iconic memory, temporal-order judgments, and contrast sensitivity). These results highlight the importance of strong, stable representations, and of the ability to resist distraction, for successfully incorporating unexpected changes into the contents of visual awareness.
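As an illustration of the kind of analysis described above (a sketch under our own assumptions, not the authors' code: the participant data are simulated and the measure names are invented stand-ins for the battery), a two-factor exploratory factor analysis over z-scored measures can be run along these lines in Python:

# Hypothetical sketch: two-factor exploratory factor analysis over a battery
# of task measures (simulated data; measure names invented for illustration).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
measures = ["change_detection", "attention", "vstm", "rivalry_rate",
            "iconic_memory", "temporal_order", "contrast_sensitivity"]

# Simulate 100 participants: two latent abilities plus measure-specific noise.
latent = rng.normal(size=(100, 2))
loadings = rng.normal(size=(2, len(measures)))
scores = latent @ loadings + rng.normal(scale=0.5, size=(100, len(measures)))

# Standardize each measure, then fit a 2-factor EFA with varimax rotation.
scores = (scores - scores.mean(axis=0)) / scores.std(axis=0)
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(scores)

# Inspect which measures load on which factor (cf. the 'visual stability'
# and 'visual ability' factors reported in the abstract).
for name, (f1, f2) in zip(measures, fa.components_.T):
    print(f"{name:22s} factor1={f1:+.2f} factor2={f2:+.2f}")

Interpreting the rotated loadings in this way is what licenses labels such as "visual stability" and "visual ability": each label summarizes the measures that load most strongly on one factor.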