In daily life, signals from the different senses are often integrated to enhance perception. However, an important yet still controversial question concerns whether attention is needed for this integration process. To investigate this question, we turned to the processing of multisensory distractors: multisensory target processing is typically confounded with attention, because people attend to the stimuli that they respond to. We therefore designed a multisensory flanker task in which the target and distractor stimuli were both multisensory and the congruency between the auditory and visual features was varied orthogonally. In addition, we manipulated whether the distractor or the target was within the focus of participants’ gaze (i.e., was overtly attended). Importantly, distractor congruency effects were modulated by this manipulation. Fixating the distractor led to crossmodal congruency effects between the visual and auditory feature dimensions (e.g., a visually incongruent distractor interfered more if it was also auditorily incongruent with the target), whereas the congruency effects were independent of each other when the distractor was not fixated (i.e., visual interference was not modulated by auditory interference in this case). These results suggest that distractors outside the focus of overt attention are processed at the level of individual features, whereas distractors presented at fixation are processed as a configuration of features. Taken together, these results suggest that the multisensory integration of irrelevant stimuli depends on the focus of spatial attention.
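As a minimal sketch of the design described above (the condition labels are illustrative, not taken from the study's materials): varying the auditory and visual congruency orthogonally means the two congruency factors are crossed independently, yielding four distractor conditions.

```python
from itertools import product

# Cross the two congruency factors independently (orthogonal variation).
# Labels are hypothetical; the study manipulated auditory and visual
# target-distractor congruency as two independent two-level factors.
conditions = [
    {"visual": v, "auditory": a}
    for v, a in product(["congruent", "incongruent"], repeat=2)
]

for c in conditions:
    print(c)  # four conditions: each visual level paired with each auditory level
```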
To respond to multisensory information, inputs from different sensory modalities must be processed and combined. Recently, overt spatial attention was shown to be a crucial factor modulating the processing of irrelevant audiovisual multisensory stimuli. Here, we investigate the processing of task-irrelevant visuotactile features in a multisensory flanker interference task incorporating visuotactile target and distractor stimuli. The congruency between the target and distractor features was varied orthogonally. Across three experiments, overt spatial attention and the spatial separation between the distractor and the target were varied systematically. When participants fixated the distractor, the processing of its visual and tactile features was not independent. Manipulating overt spatial attention, as well as the spatial separation between the target and the distractor, impacted multisensory distractor processing. These results are consistent with approaches that emphasize the role of attention in multisensory processing, specifically in relation to the cognitive load or selection difficulty of the task situation.
Public Significance Statement
This study provides insight into the modulation of multisensory distractor processing by spatial attention. To elicit multisensory processing, the visual and tactile features of the target, as well as those of the distractor, were presented in close spatiotemporal proximity. Our results clearly demonstrate that irrelevant multisensory stimuli are combined, rather than processed independently, when overt spatial attentional resources are directed toward them.
Background
Manual muscle testing (MMT) is a non-invasive assessment tool used by a variety of health care providers to evaluate neuromusculoskeletal integrity, and muscular strength in particular. In one form of MMT, called muscle response testing (MRT), muscles are said to be tested not to evaluate muscular strength but neural control. One established, but insufficiently validated, application of MRT is to assess a patient’s response to semantic stimuli (e.g., spoken lies) during a therapy session. Our primary aim was to estimate the accuracy of MRT in distinguishing false from true spoken statements in randomised and blinded experiments. A secondary aim was to compare MRT accuracy with the accuracy achieved when practitioners used only their intuition to differentiate false from true spoken statements.
Methods
Two prospective studies of diagnostic test accuracy using MRT to detect lies are presented. A true positive MRT test was one that resulted in a subjective weakening of the muscle following a lie, and a true negative was one that did not result in a subjective weakening of the muscle following a truth. Experiment 2 replicated Experiment 1 using a simplified methodology. In Experiment 1, 48 practitioners were paired with 48 MRT-naïve test patients, forming unique practitioner-test patient pairs. Practitioners were enrolled with any amount of MRT experience. In Experiment 2, 20 unique pairs were enrolled, with test patients being a mix of MRT-naïve and non-MRT-naïve participants. The primary index test was MRT. A secondary index test was also enacted in which the practitioners made intuitive guesses (“intuition”) without using MRT. The actual veracity of the spoken statement was compared with the outcome of both index tests (MRT and intuition), and their mean overall fractions correct were calculated and reported as mean accuracies.
Results
In Experiment 1, MRT accuracy, 0.659 (95% CI 0.623–0.695), was significantly different (p < 0.01) from intuition accuracy, 0.474 (95% CI 0.449–0.500), and also from chance (0.500; p < 0.01). Experiment 2 replicated the findings of Experiment 1. Testing for various factors that may have influenced MRT accuracy failed to detect any correlations.
Conclusions
MRT has repeatedly demonstrated significant accuracy for distinguishing lies from truths, compared with both intuition and chance. The primary limitation of this study is its lack of generalisability to other applications of MRT and to MMT.
Study registration
The Australian New Zealand Clinical Trials Registry (ANZCTR; www.anzctr.org.au; ID # ACTRN12609000455268) and the US-based ClinicalTrials.gov (ID # NCT01066312).
Electronic supplementary material
The online version of this article (doi:10.1186/s12906-016-1416-2) contains supplementary material, which is available to authorized users.
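To illustrate the kind of comparison against chance reported above, the following sketch computes an exact two-sided binomial test of an observed accuracy against the chance level of 0.5. This is not the study's analysis; the trial count below is hypothetical, chosen only for illustration, while 0.659 is the accuracy reported for Experiment 1.

```python
from math import comb

def binomial_p_value(successes: int, trials: int, p0: float = 0.5) -> float:
    """Exact two-sided binomial test: probability, under chance accuracy p0,
    of an outcome at least as extreme (no more likely) than the observed one."""
    def pmf(k: int) -> float:
        return comb(trials, k) * p0**k * (1 - p0)**(trials - k)
    observed = pmf(successes)
    # Sum probabilities of all outcomes no more likely than the observed count.
    return sum(pmf(k) for k in range(trials + 1) if pmf(k) <= observed + 1e-12)

# Illustrative only: the per-pair trial count is hypothetical, not from the study.
trials = 200
successes = round(0.659 * trials)  # accuracy reported for Experiment 1
print(binomial_p_value(successes, trials))
```

With any reasonably large trial count, an accuracy of 0.659 yields p < 0.01 against chance, consistent with the reported result.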
When we are repeatedly exposed to simultaneously presented stimuli, associations between these stimuli are nearly always established, both within and between sensory modalities. Such associations guide our subsequent actions and may also play a role in multisensory selection. Thus, crossmodal associations (i.e., associations between stimuli from different modalities) learned in a multisensory interference task might affect subsequent information processing. The aim of this study was to investigate the processing level of multisensory stimuli in multisensory selection by means of crossmodal aftereffects. Either feature or response associations were induced in a multisensory flanker task, while the amount of interference in a subsequent crossmodal flanker task was measured. The results of Experiment 1 revealed the existence of crossmodal interference after multisensory selection. Experiments 2 and 3 then demonstrated that this effect depends on the perceptual associations between the features themselves, rather than on associations between feature and response. Establishing response associations did not lead to a subsequent crossmodal interference effect (Experiment 2), whereas stimulus feature associations without response associations (obtained by changing the response effectors) did (Experiment 3). Taken together, this pattern of results suggests that associations in multisensory selection, and the interference of (crossmodal) distractors, operate predominantly at the perceptual rather than the response level.