In 7 free-recall experiments, we examined the benefit of drawing to-be-remembered information, relative to writing it out, as a mnemonic strategy. In Experiments 1 and 2, participants were presented with a list of words and were asked to either draw or write out each word. Drawn words were better recalled than written words. Experiments 3-5 showed that the memory boost provided by drawing could not be explained by elaborative encoding (deep level of processing, LoP), visual imagery, or picture superiority, respectively. In Experiment 6, we explored potential limitations of the drawing effect by reducing encoding time and increasing list length. Drawing, relative to writing, still benefited memory despite these constraints. In Experiment 7, the drawing effect was significant even when encoding trial types were compared in pure lists between participants, inconsistent with a distinctiveness account. Together these experiments indicate that drawing enhances memory relative to writing, across settings, instructions, and alternate encoding strategies, both within- and between-participants, and that a deep LoP, visual imagery, and picture superiority, alone or collectively, are not sufficient to explain the observed effect. We propose that drawing improves memory by encouraging a seamless integration of semantic, visual, and motor aspects of a memory trace.
The colloquialism “a picture is worth a thousand words” has reverberated through the decades, yet there is very little basic cognitive research assessing the merit of drawing as a mnemonic strategy. In our recent research, we explored whether drawing to-be-learned information enhanced memory and found it to be a reliable, replicable means of boosting performance. Specifically, we have shown this technique can be applied to enhance learning of individual words and pictures as well as textbook definitions. In delineating the mechanism of action, we have shown that gains are greater from drawing than other known mnemonic techniques, such as semantic elaboration, visualization, writing, and even tracing to-be-remembered information. We propose that drawing improves memory by promoting the integration of elaborative, pictorial, and motor codes, facilitating creation of a context-rich representation. Importantly, the simplicity of this strategy means it can be used by people with cognitive impairments to enhance memory, with preliminary findings suggesting measurable gains in performance in both normally aging individuals and patients with dementia.
Drawing a picture of to-be-remembered information substantially boosts memory performance in free-recall tasks. In the current work, we sought to test the notion that drawing confers its benefit to memory performance by creating a detailed recollection of the encoding context. In Experiments 1 and 2, we demonstrated that for both pictures and words, items that were drawn by the participant at encoding were better recognized in a later test than items that were written out. Moreover, participants' source memory (in this experiment, correct identification of whether the word was drawn or written) was superior for items drawn relative to written at encoding. In Experiments 3A and 3B, we used a remember-know paradigm to demonstrate again that drawn words were better recognized than written words, and further showed that this effect was driven by a greater proportion of recollection- rather than familiarity-based responses. Lastly, in Experiment 4 we implemented a response deadline procedure and showed that when recognition responses were speeded, thereby reducing participants' capacity for recollection, the benefit of drawing was substantially smaller. Taken together, our findings converge on the idea that drawing improves memory by providing vivid contextual information that can later be called upon to aid retrieval.
We investigated age differences in memory for spatial routes that were either actively or passively encoded. A series of virtual environments were created and presented to 20 younger (mean age = 19.71) and 20 older (mean age = 74.55) adults through a cardboard viewer. During encoding, participants explored routes presented within city, park, and mall virtual environments, and were later asked to retrace their travelled routes. Critically, participants encoded half the virtual environments by passively viewing a guided tour along a pre-selected route, and half through active exploration with volitional control of their movements via a button press on the viewer. During retrieval, participants were placed in the same starting location and asked to retrace the previously travelled route. We calculated the percentage overlap between the paths travelled at encoding and retrieval as an indicator of spatial memory accuracy, and examined various measures indexing individual differences in cognitive approach and visuo-spatial processing abilities. Results showed that active navigation, compared to passive viewing during encoding, resulted in higher spatial memory accuracy, with the magnitude of this memory enhancement being significantly larger in older than in younger adults. Regression analyses showed that age and score on the Hooper Visual Organization Test predicted spatial memory accuracy following both passive and active encoding of routes. The model predicting accuracy following active encoding additionally included the distance of stops from an intersection as a significant predictor, illuminating a cognitive approach that specifically contributes to memory benefits following active navigation. Results suggest that age-related deficits in spatial memory can be reduced by active encoding.
Although some studies have shown that haptic and visual identification seem to rely on similar processes, few studies have directly compared the two. We investigated haptic and visual object identification by asking participants to learn to recognize (Experiments 1 and 3), or to match (Experiment 2), novel objects that varied only in shape. Participants explored objects haptically, visually, or bimodally, and were then asked to identify objects haptically and/or visually. We demonstrated that patterns of identification errors were similar across identification modalities, independently of learning and testing condition, suggesting that the haptic and visual representations in memory were similar. We also demonstrated that identification performance depended on both learning and testing conditions: visual identification surpassed haptic identification only when participants explored the objects visually or bimodally. When participants explored the objects haptically, haptic and visual identification were equivalent. Interestingly, when participants were simultaneously presented with two objects (one presented haptically and one presented visually), object similarity influenced performance only when participants were asked to indicate whether the two objects were the same, or when participants had learned about the objects visually, without any haptic input. The results suggest that haptic and visual object representations rely on similar processes, that they may be shared, and that visual processing may not always lead to the best performance.