The study presented here investigated how environmental sounds influence picture naming. In a series of four experiments, participants named pictures (e.g., the picture of a horse) while hearing task-irrelevant sounds (e.g., neighing, barking, or drumming). Experiments 1 and 2 established two findings: facilitation from congruent sounds (e.g., picture: horse, sound: neighing) and interference from semantically related sounds (e.g., sound: barking), both relative to unrelated sounds (e.g., sound: drumming). Experiment 3 replicated both effects in a situation in which participants were not familiarized with the sounds prior to the experiment. Experiment 4 replicated the congruency facilitation effect but showed that semantic interference was not obtained with distractor sounds that were not associated with the target pictures (i.e., were not part of the response set). The general pattern of facilitation from congruent sound distractors and interference from semantically related sound distractors resembles the pattern commonly observed with distractor words. This parallelism suggests that the underlying processes are not specific to either distractor words or distractor sounds but instead reflect general aspects of semantic-lexical selection in language production. The results indicate that language production theories need to include a competitive selection mechanism at the lexical processing stage, the prelexical processing stage, or both.
Background: Premenstrual syndrome (PMS) is characterized by a cluster of psychological and somatic symptoms during the late luteal phase of the menstrual cycle that disappear after the onset of menses. Behavioral differences in emotional and cognitive processing have been reported in women with PMS, and it is of particular interest whether PMS affects the parallel execution of emotional and cognitive processing. Related to this is the question of how stress levels in women with PMS compare to those in women without PMS. Cortisol has been shown to affect emotional processing in general, and women with severe PMS have been shown to have a particular cortisol profile. Methods: We measured performance in an emotional conflict task and stress levels in women with PMS (n = 15) and women without PMS (n = 15) throughout their menstrual cycle. Results: We found a significant increase (p = 0.001) in the mean reaction time for resolving emotional conflict from the follicular to the luteal cycle phase in all subjects. Only women with PMS demonstrated an increase in physiological and subjective stress measures during the luteal menstrual cycle phase. Conclusions: Our findings suggest that the menstrual cycle modulates the integration of emotional and cognitive processing in all women. Preliminary data support the secondary hypothesis that stress levels are mediated by the menstrual cycle phase only in women with PMS. The presented evidence for menstrual cycle-specific differences in integrating emotional and cognitive information highlights the importance of controlling for menstrual cycle phase in studies that aim to elucidate the interplay of emotion and cognition.
Having the means to share research data openly is essential to modern science. For human research, a key aspect of this endeavor is obtaining consent from participants, not just to take part in a study, which is a basic ethical principle, but also to share their data with the scientific community. To ensure that participants' privacy is respected, national and/or supranational regulations and laws are in place. It is, however, not always clear to researchers what their implications are, nor how to comply with them. The Open Brain Consent (https://open-brain-consent.readthedocs.io/en/stable/) is an international initiative that aims to provide researchers in the brain imaging community with information about data sharing options and tools. We present here a short history of this project and its latest developments, and share pointers to consent forms, including a template consent form that is compliant with the EU General Data Protection Regulation. We also share pointers to an associated data use agreement that is useful not only in the EU context but also for any researchers dealing with personal (clinical) data elsewhere.
In this study we explored the locus of semantic interference in a novel picture-sound interference task in which participants name pictures while ignoring environmental distractor sounds. In a previous study using this task (Mädebach, Wöhner, Kieseler, & Jescheniak, in Journal of Experimental Psychology: Human Perception and Performance, 43, 1629-1646, 2017), we showed that semantically related distractor sounds (e.g., BARKING) interfere with a picture-naming response (e.g., "horse") more strongly than unrelated distractor sounds do (e.g., DRUMMING). In the experiment reported here, we employed the psychological refractory period (PRP) approach to explore the locus of this effect. We combined a geometric form classification task (square vs. circle; Task 1) with the picture-sound interference task (Task 2). The stimulus onset asynchrony (SOA) between the tasks was systematically varied (0 vs. 500 ms). There were three central findings. First, the semantic interference effect from distractor sounds was replicated. Second, picture naming (in Task 2) was slower with the short than with the long task SOA. Third, the two effects were additive; that is, the semantic interference effect was of similar magnitude at both task SOAs. This suggests that the interference arises during response selection or later stages, not during early perceptual processing. This finding corroborates the theory that semantic interference from distractor sounds reflects a competitive selection mechanism in word production.
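The additive-factors logic behind this inference can be made concrete with a toy central-bottleneck simulation. The sketch below uses made-up stage durations (not estimates from the study) and shows why a distractor effect located before the bottleneck is absorbed into slack at the short SOA (underadditivity), whereas an effect at or after central response selection survives at full size at both SOAs (additivity), which is the pattern the experiment observed.

```python
# Toy central-bottleneck (PRP) simulation; all durations in ms are
# illustrative assumptions, not parameters from the reported study.

def task2_rt(soa, perceptual, central, distractor_slowing, locus):
    """Task 2 reaction time in a simple central-bottleneck model.

    Task 2's central (response-selection) stage cannot start before
    Task 1's central stage is finished; at a short SOA, extra time
    spent in Task 2's pre-bottleneck stage is absorbed into this
    waiting period ("slack").
    """
    T1_CENTRAL_END = 400  # assumed end of Task 1's central stage

    pre = perceptual + (distractor_slowing if locus == "perceptual" else 0)
    central_dur = central + (distractor_slowing if locus == "central" else 0)

    # Task 2 central processing starts when its input is ready AND the
    # bottleneck is free, whichever comes later.
    central_start = max(soa + pre, T1_CENTRAL_END)
    finish = central_start + central_dur
    return finish - soa  # RT measured from Task 2 stimulus onset


for locus in ("perceptual", "central"):
    effects = {}
    for soa in (0, 500):
        related = task2_rt(soa, 150, 300, 30, locus)    # related distractor
        unrelated = task2_rt(soa, 150, 300, 0, locus)   # unrelated distractor
        effects[soa] = related - unrelated
    print(locus, effects)
# perceptual locus -> effect absorbed at SOA 0 (underadditive)
# central locus    -> equal effect at both SOAs (additive)
```

Because the observed semantic interference effect was of similar size at both SOAs, the "central" pattern fits the data, placing the effect at or after response selection.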
The identification of animal behavior in video is a critical but time-consuming task in many areas of research. Here, we introduce DeepAction, a deep learning-based toolbox for automatically annotating animal behavior in video. Our approach uses features extracted from raw video frames by a pretrained convolutional neural network to train a recurrent neural network classifier. We evaluate the classifier on two benchmark rodent datasets and one octopus dataset. We show that it achieves high accuracy, requires little training data, and surpasses both human agreement and most comparable existing methods. We also create a confidence score for classifier output, and show that our method provides an accurate estimate of classifier performance and reduces the time required by human annotators to review and correct automatically produced annotations. We release our system and accompanying annotation interface as an open-source MATLAB toolbox.
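The pipeline described above can be sketched in a few lines. This is a minimal NumPy illustration, not the toolbox's actual (MATLAB) code: per-frame features stand in for the output of a pretrained CNN, a single untrained tanh recurrent cell with a softmax readout stands in for the trained classifier, and the confidence score is assumed here to be the mean top-class probability over frames.

```python
# Sketch of a DeepAction-style pipeline: CNN features -> recurrent
# classifier -> per-frame labels + a confidence score for review.
# Dimensions, weights, and the confidence definition are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for pretrained-CNN features (one 512-d vector per frame).
n_frames, feat_dim, n_behaviors = 120, 512, 4
features = rng.normal(size=(n_frames, feat_dim))

# A tiny recurrent classifier with random (untrained) weights,
# purely to show the data flow; in practice these are learned.
W_in = rng.normal(scale=0.01, size=(feat_dim, 64))
W_rec = rng.normal(scale=0.01, size=(64, 64))
W_out = rng.normal(scale=0.01, size=(64, n_behaviors))

h = np.zeros(64)
probs = np.empty((n_frames, n_behaviors))
for t in range(n_frames):
    h = np.tanh(features[t] @ W_in + h @ W_rec)   # recurrent state update
    logits = h @ W_out
    e = np.exp(logits - logits.max())             # numerically stable softmax
    probs[t] = e / e.sum()

labels = probs.argmax(axis=1)            # per-frame behavior labels
confidence = probs.max(axis=1).mean()    # assumed confidence score

# Low-confidence videos would be routed to a human annotator for
# review, which is how the toolbox reduces manual annotation time.
print(labels[:10], round(float(confidence), 3))
```

The key design point is the division of labor: the expensive visual representation comes from a frozen pretrained network, so only the lightweight recurrent classifier needs task-specific training data.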