Manual reaction times to visual, auditory, and tactile stimuli presented simultaneously or with a delay were measured to test for multisensory interaction effects in a simple detection task with redundant signals. Responses to trimodal stimulus combinations were faster than those to bimodal combinations, which in turn were faster than reactions to unimodal stimuli. Response enhancement increased with decreasing auditory and tactile stimulus intensity and was a U-shaped function of stimulus onset asynchrony. Distribution inequality tests indicated that the multisensory interaction effects were larger than predicted by separate activation models, including the additional facilitation of trimodal over bimodal responses. The results are discussed with respect to previous findings in a focused attention task and are compared with multisensory integration rules observed in bimodal and trimodal superior colliculus neurons in the cat and monkey.
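The "separate activation" benchmark mentioned above is commonly formalized as the race-model inequality: if the two channels race independently, the bimodal RT distribution can never exceed the sum of the unimodal distributions, F_VA(t) ≤ F_V(t) + F_A(t) for every t. A minimal sketch of such a distribution-inequality test, using fabricated RT samples rather than the study's data:

```python
import numpy as np

def ecdf(samples, t):
    """Empirical CDF of reaction-time samples evaluated at time t."""
    return np.mean(np.asarray(samples) <= t)

def race_bound_violated(rt_uni1, rt_uni2, rt_bi, t):
    """Race-model inequality: under separate activation,
    F_bi(t) <= F_uni1(t) + F_uni2(t) at every t.  Returns True if the
    bimodal CDF exceeds that bound at t, i.e. the facilitation is too
    large for an independent race (evidence for coactivation)."""
    return ecdf(rt_bi, t) > ecdf(rt_uni1, t) + ecdf(rt_uni2, t)

# Illustrative (fabricated) RT samples in milliseconds -- not the study's data.
rng = np.random.default_rng(0)
rt_v = rng.normal(280, 30, 5000)      # visual-only RTs
rt_a = rng.normal(300, 30, 5000)      # auditory-only RTs
rt_va = np.minimum(rt_v, rt_a) - 25   # bimodal RTs, faster than any race allows
print(race_bound_violated(rt_v, rt_a, rt_va, 230))
```

In practice the inequality is evaluated across a range of quantiles, not a single time point; a violation anywhere is enough to reject the whole class of separate-activation models.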
Three hypotheses--the bound-change hypothesis, drift-rate-change hypothesis, and two-stage-processing hypothesis--are proposed to account for data from a perceptual discrimination task in which three different response deadlines were involved and three different payoffs were presented prior to each individual trial. The aim of the present research was to show (1) how the three hypotheses incorporate response biases into a sequential sampling decision process, (2) how payoffs and deadlines affect choice probabilities, and (3) how well each hypothesis predicts response times and choice probabilities. The two-stage-processing hypothesis gave the best account, especially of the choice probabilities, whereas the drift-rate-change hypothesis had difficulty predicting choice probabilities as a function of deadline.
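The first two hypotheses differ in where a payoff-induced bias enters the sequential sampling process: in the bounds (here implemented as a starting-point shift, which is equivalent to moving the two bounds asymmetrically) or in the drift rate. A minimal diffusion-process sketch with entirely hypothetical parameter values, showing that both mechanisms shift choice probability toward the favored response:

```python
import numpy as np

def diffusion_trial(drift, start, bound, dt=0.002, sigma=1.0, rng=None):
    """Simulate one sequential-sampling (diffusion) trial: evidence
    starts at `start` and accumulates with Gaussian noise until it
    crosses +bound (choice A) or -bound (choice B).
    Returns (choice, decision_time)."""
    rng = rng if rng is not None else np.random.default_rng()
    x, t = start, 0.0
    while abs(x) < bound:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("A" if x >= bound else "B"), t

def simulate(n, drift, start, bound, seed=0):
    """Choice probability for A and mean decision time over n trials."""
    rng = np.random.default_rng(seed)
    results = [diffusion_trial(drift, start, bound, rng=rng) for _ in range(n)]
    p_a = np.mean([c == "A" for c, _ in results])
    mean_rt = np.mean([t for _, t in results])
    return p_a, mean_rt

# Hypothetical payoff favoring response A, implemented two ways:
p_base, _ = simulate(400, drift=0.5, start=0.0, bound=1.0)   # unbiased baseline
p_bound, _ = simulate(400, drift=0.5, start=0.3, bound=1.0)  # starting-point/bound shift
p_drift, _ = simulate(400, drift=1.0, start=0.0, bound=1.0)  # drift-rate shift
```

Both biased runs yield a higher probability of choosing A than the baseline; the hypotheses are distinguished empirically by how the bias interacts with deadlines, which this toy sketch does not model.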
Saccadic reaction time to visual targets tends to be faster when stimuli from another modality (in particular, audition and touch) are presented in close temporal or spatial proximity, even when subjects are instructed to ignore the accessory input (focused attention task). Multisensory interaction effects measured in neural structures involved in saccade generation (in particular, the superior colliculus) have demonstrated a similar spatio-temporal dependence. Neural network models of multisensory spatial integration have been shown to generate convergence of the visual, auditory, and tactile reference frames and the sensorimotor coordinate transformations necessary for coordinated head and eye movements. However, because these models do not capture the temporal coincidences critical for multisensory integration to occur, they cannot easily predict multisensory effects observed in behavioral data such as saccadic reaction times. This article proposes a quantitative stochastic framework, the time-window-of-integration model, to account for the temporal rules of multisensory integration. Saccadic responses collected from a visual-tactile focused attention task are shown to be consistent with the time-window-of-integration model predictions.
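The time-window-of-integration (TWIN) idea can be sketched as a two-stage Monte-Carlo simulation: a first-stage race between peripheral sensory processes, integration occurring only when the accessory stimulus wins the race and the target's peripheral process terminates within a fixed time window, and a second stage that is shortened when integration occurs. The exponential peripheral times and all parameter values below are illustrative assumptions, not the paper's fitted estimates:

```python
import numpy as np

def twin_mean_rt(rate_v, rate_t, soa, window, delta, base,
                 n=100_000, seed=1):
    """Monte-Carlo sketch of a TWIN-style model.
    Stage 1: race between peripheral processing of the visual target
    (exponential with rate `rate_v`) and the tactile accessory
    (rate `rate_t`), the accessory shifted by `soa` ms.  Integration
    occurs only if the accessory finishes first and the target finishes
    within `window` ms afterwards.  Stage 2: a `base`-ms second stage,
    shortened by `delta` ms on integrated trials.
    Returns (mean saccadic RT, probability of integration)."""
    rng = np.random.default_rng(seed)
    v = rng.exponential(1.0 / rate_v, n)        # target peripheral time (ms)
    t = rng.exponential(1.0 / rate_t, n) + soa  # accessory peripheral time (ms)
    integrated = (t < v) & (v < t + window)     # window opened by the accessory
    rt = v + base - delta * integrated
    return rt.mean(), integrated.mean()

# Hypothetical parameters: accessory synchronous vs. far outside the window.
rt_sync, p_sync = twin_mean_rt(1/60, 1/50, soa=0,   window=200, delta=40, base=250)
rt_far,  p_far  = twin_mean_rt(1/60, 1/50, soa=300, window=200, delta=40, base=250)
```

With a synchronous accessory the integration probability is high and mean RT drops; at a large stimulus onset asynchrony the window is rarely met and the facilitation vanishes, reproducing the temporal dependence the model is designed to capture.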