It remains debated whether mood-congruent recall (i.e., superior recall for mood-congruent material) reflects memory encoding processes or reduces to processes during retrieval. We therefore investigated the neurophysiological correlates of mood-congruent memory during emotional word encoding. Event-related potentials (ERPs) were recorded while participants in good or bad mood states encoded words of positive and negative valence. Words were either presented complete or had to be generated from fragments. Participants had to memorize the words for subsequent recall. Mood-congruent recall tended to be largest in good mood for generated words. Starting at 200 ms, mood-congruent ERP effects of word valence were obtained in good, but not in bad, mood. For good mood only, source analysis revealed valence-related activity in ventral temporal cortex and, for generated words, also in prefrontal cortex; these areas are known to be involved in semantic processing. Our findings are consistent with the view that mood-congruent recall depends on the activation of mood-congruent semantic knowledge during encoding: in good mood, incoming stimuli are more readily assimilated to stored knowledge structures, particularly during generative encoding tasks. The present results therefore show that mood-congruent memory originates during encoding and cannot be reduced to strategic processes during retrieval.
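For concreteness, the following is a minimal sketch of the standard condition-wise ERP averaging step that underlies such analyses; it is not the authors' actual pipeline, and the sampling rate, epoch window, and condition labels are assumptions.

```python
import numpy as np

FS = 500                 # Hz (assumed sampling rate)
PRE, POST = 0.2, 0.8     # assumed epoch: 200 ms pre- to 800 ms post-stimulus

def epochs(eeg, onsets):
    """Cut stimulus-locked epochs from continuous EEG (channels x samples)."""
    pre, post = int(PRE * FS), int(POST * FS)
    segs = np.stack([eeg[:, t - pre:t + post] for t in onsets])
    # Baseline correction: subtract the mean of the pre-stimulus interval.
    return segs - segs[:, :, :pre].mean(axis=2, keepdims=True)

def erp(eeg, onsets, labels, condition):
    """Average all epochs belonging to one condition (e.g. 'positive')."""
    sel = [t for t, lab in zip(onsets, labels) if lab == condition]
    return epochs(eeg, sel).mean(axis=0)   # channels x samples

# A valence effect would then be inspected as a difference wave from
# 200 ms post-stimulus onward, e.g.:
# effect = erp(eeg, onsets, labels, "positive") - erp(eeg, onsets, labels, "negative")
```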
We show that simple perceptual competences can emerge from an internal simulation of action effects and are thus grounded in behavior. A simulated agent learns to distinguish between dead ends and corridors without having to represent these concepts in the sensory domain. Initially, the agent is endowed only with a simple value system and the means to extract low-level features from an image. Through interaction with the environment, it acquires a visuo-tactile forward model that allows it to predict how the visual input changes under its movements, and whether a movement will lead to a collision. From short-term predictions based on the forward model, the agent learns an inverse model. The inverse model in turn suggests which actions should be simulated in long-term predictions, and these long-term predictions eventually give rise to the perceptual ability.
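The abstract leaves the models' implementation open. The sketch below illustrates the described loop under stated assumptions: linear least-squares stand-ins for the forward and inverse models, and an assumed collision threshold and simulation horizon; none of this is the paper's actual specification.

```python
import numpy as np

class ForwardModel:
    """Visuo-tactile forward model: (features, action) ->
    (predicted next features, predicted collision score)."""
    def fit(self, feats, acts, next_feats, collided):
        X = np.hstack([feats, acts])
        Y = np.hstack([next_feats, collided[:, None].astype(float)])
        self.W, *_ = np.linalg.lstsq(X, Y, rcond=None)

    def predict(self, f, a):
        out = np.concatenate([f, a]) @ self.W
        return out[:-1], out[-1]          # next features, collision score

class InverseModel:
    """Learned from short-term forward-model predictions:
    (features, desired next features) -> suggested action."""
    def fit(self, feats, next_feats, acts):
        X = np.hstack([feats, next_feats])
        self.W, *_ = np.linalg.lstsq(X, acts, rcond=None)

    def act(self, f, goal):
        return np.concatenate([f, goal]) @ self.W

def perceive(fwd, inv, feats, goal, horizon=20, thresh=0.5):
    """Long-term internal simulation: chain forward-model predictions along
    the actions the inverse model suggests. A predicted collision within
    the horizon is read as a dead end, otherwise as a corridor."""
    f = feats
    for _ in range(horizon):
        f, collision = fwd.predict(f, inv.act(f, goal))
        if collision > thresh:
            return "dead end"
    return "corridor"
```

The key design point the sketch preserves is that "dead end" and "corridor" are never represented in the sensory domain: they fall out of whether simulated movement sequences are predicted to end in collision.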
To reach for and grasp an object, visual information about the object must be transformed into motor or postural commands for the arm and hand. In this paper, we present a robot model for visually guided reaching and grasping. The model mimics two alternative processing pathways for grasping, which are also likely to coexist in the human brain. The first pathway directly uses the retinal activation to encode the target position. In the second pathway, a saccade controller makes the eyes (cameras) fixate the target, and the gaze direction is used instead as positional input. For both pathways, an arm controller transforms information about the target's position and orientation into an arm posture suitable for grasping. For training the saccade controller, we propose a novel staged learning method that does not require a teacher to provide the necessary motor commands. The arm controller uses unsupervised learning: it is based on a density model of the sensor and motor data. Using this density, a mapping is achieved by completing a partially given sensorimotor pattern. The controller copes with the ambiguity of having a set of redundant arm postures for a given target. The combined model of saccade and arm controllers was able to fixate and grasp an elongated object with arbitrary orientation, at an arbitrary position on a table, in 94% of trials.
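Density-based pattern completion can be illustrated with a common stand-in: a Gaussian mixture fitted to concatenated sensor-motor vectors, completed by Gaussian conditioning. The dimensions, component count, and the choice of the single most responsible component below are assumptions for illustration, not the paper's specification.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

D_S = 4   # sensor part: e.g. target position + orientation (assumed)
D_M = 6   # motor part: e.g. six joint angles (assumed)

def fit_density(sensor, motor, n_components=8):
    """Unsupervised learning: one joint density over sensor and motor data."""
    data = np.hstack([sensor, motor])
    return GaussianMixture(n_components, covariance_type="full").fit(data)

def complete(gmm, s):
    """Complete a partial sensorimotor pattern: given the sensor part s,
    infer the motor part from the mixture component most responsible
    for s, via Gaussian conditioning E[motor | sensor = s]."""
    resp = np.array([
        w * multivariate_normal.pdf(s, mu[:D_S], cov[:D_S, :D_S])
        for w, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_)
    ])
    k = int(resp.argmax())
    mu, cov = gmm.means_[k], gmm.covariances_[k]
    gain = cov[D_S:, :D_S] @ np.linalg.inv(cov[:D_S, :D_S])
    return mu[D_S:] + gain @ (s - mu[:D_S])

# Usage sketch: posture = complete(fit_density(S, M), target_vector)
```

Committing to one component rather than averaging across all of them is one plausible way to cope with redundancy: when several distinct arm postures reach the same target, a weighted average of them could itself be an invalid posture.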
Previous research on spontaneous trait inferences (STIs) was based on verbal stimuli. In this research, stimulus behaviors were presented pictorially, and STIs were measured in terms of the time needed to identify a trait term that gradually appeared from behind a mask. An attempt was made to demonstrate that STIs cannot be reduced to a side effect of language comprehension. A pilot study showed that the phenomenon extends to pictures and that a graphical encoding task leads to even stronger STIs than verbal recoding. Experiment 1 corroborated the basic finding using an improved methodology. In Experiment 2, specific encoding operations were manipulated in a verification task. STIs were strongest when the verification task referred to concrete stimulus aspects. The findings support neither an account in terms of mere language comprehension nor a verbal-interference or inferential-distance account, but they are consistent with a concreteness advantage or picture-superiority effect.