Our environment is richly structured, with objects producing correlated information within and across sensory modalities. A prominent challenge faced by our perceptual system is to learn such regularities. Here, we examined statistical learning and addressed learners' ability to track transitional probabilities between elements in the auditory and visual modalities. Specifically, we investigated whether cross-modal information affects statistical learning within a single modality. Participants were familiarized with a statistically structured stream in one modality (either audition or vision) accompanied by different types of cues in a second modality (vision or audition, respectively). The results revealed that statistical learning within either modality is affected by cross-modal information, with learning being enhanced or reduced according to the type of cue provided in the second modality.
Learning the structure of the environment (e.g., what usually follows what) enables animals to behave effectively and prepare for future events. Unintentional learning is capable of efficiently producing such knowledge, as has been demonstrated with the Artificial Grammar Learning (AGL) paradigm, among others. It has been argued that selective attention is a necessary and sufficient condition for visual implicit learning. Experiment 1 shows that spatial attention is not sufficient for implicit learning: learning does not occur if the stimuli instantiating the structure are task irrelevant. In a second experiment, we demonstrate that this holds even with an abundance of available attentional resources. Together, these results challenge the current view of the relations between attention, resources, and implicit learning.
A major issue in visual scene recognition involves the extraction of recurring chunks from a sequence of complex scenes. Previous studies have suggested that this kind of learning is accomplished according to Bayesian principles that constrain the types of extracted chunks. Here we show that perceptual grouping cues are also incorporated in this Bayesian model, providing additional evidence for the possible span of chunks. Experiment 1 replicates previous results showing that observers can learn three-element chunks without learning smaller, two-element chunks embedded within them. Experiment 2 shows that the very same embedded chunks are learned if they are grouped by perceptual cues, suggesting that perceptual grouping cues play an important role in chunk extraction from complex scenes.
Memory consists of multiple processes: encoding information, consolidating it into short- and long-term memory, and later retrieving relevant information. Targeted memory reactivation is an experimental method in which sensory components of a multisensory representation (such as sounds or odors) are 'reactivated', facilitating the later retrieval of unisensory attributes. We examined whether novel and unpredicted events benefit from reactivation to a greater degree than normal stimuli. We presented participants with everyday objects, 'tagged' with sounds (e.g., animals and their matching sounds), at different screen locations. 'Oddballs' were created by presenting unusual objects and sounds (e.g., a unicorn with a heartbeat sound). During a short reactivation phase, participants listened to a replay of normal and oddball sounds. Participants were then tested on their memory for visual and spatial information in the absence of sounds. Participants were better at remembering the oddball objects compared to normal ones. Importantly, participants were also better at recalling the locations of oddball objects whose sounds were reactivated, compared to objects whose sounds were not presented again. These results suggest that episodic memory benefits from associating objects with unusual cues, and that reactivating those cues strengthens the entire multisensory representation, resulting in enhanced memory for unisensory attributes.