Previous research demonstrated that associating a stimulus with value (e.g., monetary reward) can increase its salience and induce value-driven attentional capture when the stimulus later appears as a distractor in visual search. Here we investigate to what extent value-driven attentional capture is affected by the perceptual salience of the stimulus and by the type of value attached to it. A color previously associated with monetary gain or loss impaired subsequent search for a unique shape target (Experiment 1), but a shape previously associated with gain or loss did not affect search for a unique color target (Experiments 2A and 2B), indicating that both the associative learning of value and value-driven attentional capture are modulated by the perceptual salience of a stimulus. Value-driven attentional capture reemerged when the shape distractor was paired more strongly with monetary loss (Experiment 2C) or was paired with pain stimulation (Experiment 3), indicating that when the value is sufficiently significant to an organism, it can render a perceptually less salient stimulus capable of capturing attention in visual search. These results suggest that value interacts with perceptual salience to modulate value-driven attentional capture, and that the extent to which value information captures attention depends on the biological significance of the value attribute.
It is well known that directing attention to a location in space enhances the processing efficiency of stimuli presented at that location. Research has also shown that around this area of enhanced processing, there is an inhibitory region within which processing of information is suppressed. In this study, we investigated whether a reward-associated stimulus can break through the inhibitory surround. A distractor that was previously associated with high or low reward was presented near the target with a variable distance between them. For low-reward distractors, only the distractor very close to the target caused interference to target processing; for high-reward distractors, both near and relatively far distractors caused interference, demonstrating that task-irrelevant reward-associated stimuli can capture attention even when presented within the inhibitory surround.
Focusing attention on a target creates a center-surround inhibition such that distractors located close to the target do not capture attention. Recent research showed that a distractor can break through this surround inhibition when associated with reward. However, the brain basis for this reward-based attention is unclear. In this fMRI study, we presented a distractor associated with high or low reward at different distances from the target. Behaviorally the low-reward distractor did not capture attention and thus did not cause interference, whereas the high-reward distractor captured attention only when located near the target. Neural activity in extrastriate cortex mirrored the behavioral pattern. A comparison between the high-reward and the low-reward distractors presented near the target (i.e., reward-based attention) and a comparison between the high-reward distractors located near and far from the target (i.e., spatial attention) revealed a common frontoparietal network, including inferior frontal gyrus and inferior parietal sulcus as well as the visual cortex. Reward-based attention specifically activated the anterior insula (AI). Dynamic causal modelling showed that reward modulated the connectivity from AI to the frontoparietal network but not the connectivity from the frontoparietal network to the visual cortex. Across participants, the reward-based attentional effect could be predicted both by the activity in AI and by the changes of spontaneous functional connectivity between AI and ventral striatum before and after reward association. These results suggest that AI encodes reward-based salience and projects it to the stimulus-driven attentional network, which enables the reward-associated distractor to break through the surround inhibition in the visual cortex.
Reward-predictive stimuli can increase an automatic response tendency, which needs to be counteracted by effortful response inhibition when this tendency is inappropriate for the current task. Here we investigated how the human brain implements this dynamic process by adopting a reward-modulated Simon task while acquiring EEG and fMRI data in separate sessions. In the Simon task, a lateral target stimulus triggers an automatic response tendency of the spatially corresponding hand, which needs to be overcome if the activated hand is opposite to what the task requires, thereby delaying the response. We associated high or low reward with different targets, the location of which could be congruent or incongruent with the correct response hand. High-reward targets elicited larger Simon effects than low-reward targets, suggesting an increase in the automatic response tendency induced by the stimulus location. This tendency was accompanied by modulations of the lateralized readiness potential over the motor cortex, and was inhibited soon after if the high-reward targets were incongruent with the correct response hand. Moreover, this process was accompanied by enhanced theta oscillations in medial frontal cortex and enhanced activity in a frontobasal ganglia network. With dynamic causal modeling, we further demonstrated that the connection from the presupplementary motor area (pre-SMA) to the right inferior frontal cortex (rIFC) played a crucial role in reward-modulated response inhibition. Our results support a dynamic neural model of reward-induced response activation and inhibition, and shed light on the neural communication between reward and cognitive control in generating adaptive behaviors.
Although it has been well documented that the spatial inhibitory effect induced by repetition of location (i.e., spatial inhibition of return, or IOR) occurs cross-modally, we do not yet know whether nonspatial (e.g., identity-based) repetition-induced inhibition occurs in a cross-modal fashion as well. In the present study, we adopted a novel cross-modal paradigm for nonspatial, semantic-based repetition. An intervening neutral cue, whose semantic identity was different from those of both the prime and the target, was introduced between the prime and the target in a repetition-priming task. The modalities of the prime, the neutral cue, and the target could be either visual or auditory, and the prime and the target could refer either to the same or to different semantic identities. By adopting this paradigm, we aimed to answer two questions: (1) What are the specific conditions under which cross-modal semantic-based repetition inhibition occurs? (2) Are the representations inhibited in the semantic-based repetition inhibition effect supramodal or modality-specific? Our results suggested that semantic-based repetition inhibition occurs only when the prime and the neutral cue are from the same sensory modality, and that it occurs irrespective of whether the modality of the target is cued and irrespective of whether the modality of the target is auditory or visual. Taken together, our results suggest that the occurrence of cross-modal nonspatial repetition inhibition is conditional and that the nonspatial representations inhibited by the repetition inhibition are supramodal.
Training convolutional neural networks (CNNs) for segmentation of the pulmonary airway, arteries, and veins is challenging because of sparse supervisory signals caused by the severe class imbalance between tubular targets and background. We present a CNN-based method for accurate airway and artery-vein segmentation in non-contrast computed tomography that is highly sensitive to tenuous peripheral bronchioles, arterioles, and venules. The method first uses a feature recalibration module to make the best use of the features learned by the network: spatial information is integrated to retain the relative priority of activated regions, which benefits the subsequent channel-wise recalibration. An attention distillation module is then introduced to reinforce representation learning of tubular objects; fine-grained details in high-resolution attention maps are passed down recursively from one layer to the previous layer to enrich context. An anatomical prior consisting of a lung context map and a distance transform map is designed and incorporated for better artery-vein differentiation. Extensive experiments demonstrated considerable performance gains brought by these components. Compared with state-of-the-art methods, our method extracted many more branches while maintaining competitive overall segmentation performance. Code and models will be made available at http://www.pami.sjtu.edu.cn.
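The attention-distillation idea described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the function names, the channel-wise squared-activation attention map, and the fixed 2x resolution gap between layers are illustrative assumptions, sketching only the general technique of matching a layer's attention map to the (upsampled) map of an adjacent layer via an auxiliary loss.

```python
import numpy as np

def attention_map(features):
    """Collapse a C x H x W feature tensor into a spatial attention map by
    summing squared activations over channels, then unit-normalizing so maps
    from different layers are comparable. (Illustrative choice of mapping.)"""
    amap = np.sum(features ** 2, axis=0)            # H x W
    return amap / (np.linalg.norm(amap) + 1e-8)

def upsample2x(amap):
    """Nearest-neighbour 2x upsampling, so a lower-resolution attention map
    can be compared with the higher-resolution map of an adjacent layer."""
    return amap.repeat(2, axis=0).repeat(2, axis=1)

def distillation_loss(hires_feats, lores_feats):
    """Mean-squared error between a high-resolution attention map and the
    upsampled map of a lower-resolution layer. Minimizing this auxiliary
    loss during training encourages fine-grained tubular detail to be
    shared between adjacent layers; applied recursively across layer
    pairs, it plays the role of the attention distillation module."""
    a_hi = attention_map(hires_feats)               # e.g. 8 x 8
    a_lo = upsample2x(attention_map(lores_feats))   # 4 x 4 -> 8 x 8
    return float(np.mean((a_hi - a_lo) ** 2))
```

In a full training loop this loss would be summed over consecutive decoder stages and added, with a small weight, to the segmentation loss.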
Previous studies have shown that reward can enhance cognitive control and reduce conflict in visual processing. Here we investigate (a) whether and how reward influences cross-modal conflict control and (b) how the shift of attention across modalities modulates the effect of reward on cross-modal conflict control. In four experiments, a cue indicating the reward availability of a given trial (reward vs. no reward) was presented prior to a target. The target was either a visual or an auditory letter, which was accompanied by a distracting letter from the other modality. The identity of the distracting letter was either the same as or different from the identity of the target letter (congruent vs. incongruent). When the cue modality was constant (Experiment 1) or changed across different experimental blocks (Experiment 3), the interference effect (i.e., the response time difference between incongruent and congruent trials) was smaller following a reward cue than a no-reward cue, suggesting that reward can reduce cross-modal conflict. In contrast, when the cue modality changed trial-by-trial in an unpredictable way (Experiments 2 and 4), reward reduced cross-modal conflict only when the cue and the target were from different modalities and were separated by a long stimulus onset asynchrony (SOA), but not when they shared the same modality or were separated by a short SOA. These results suggest that reward can facilitate cross-modal conflict resolution, and that this effect may critically depend on both the preparatory state between the cue and the target and the timing of initiating cognitive control.