The temporal perception of simple auditory and visual stimuli can be modulated by exposure to asynchronous audiovisual speech. For instance, research using the temporal order judgment (TOJ) task has shown that exposure to temporally misaligned audiovisual speech signals can induce temporal adaptation that influences the TOJs of other (simpler) audiovisual events (Navarra et al., 2005, Cognit Brain Res 25:499-507). Given that TOJ and simultaneity judgment (SJ) tasks appear to reflect different underlying mechanisms, we investigated whether adaptation to asynchronous speech inputs would also influence performance on the SJ task. Participants judged whether a light flash and a noise burst, presented at varying stimulus onset asynchronies, were simultaneous or not, or else discriminated which of the two sensory events appeared to have occurred first. While performing these tasks, participants monitored a continuous speech stream for target words that were presented either in synchrony or with the audio channel lagging 300 ms behind the video channel. We found that the sensitivity of participants' TOJ and SJ responses was reduced when the background speech stream was desynchronized. A significant modulation of the point of subjective simultaneity (PSS) was also observed in the SJ task but, interestingly, not in the TOJ task, supporting previous claims that the TOJ and SJ tasks may tap somewhat different aspects of temporal perception.
A novel process was designed for the large-scale isolation of bryostatin 1 from the bryozoan Bugula neritina L. in order to obtain multigram quantities of highly pure material for formulation studies, preclinical toxicology, and clinical trials in cancer patients. Multigram quantities of bryostatin 1 were obtained from a collection of approximately 10,000 gallons of wet animal. A phorbol dibutyrate (PDBu) receptor binding assay and HPLC with photodiode array detection were used for the design, validation, and control of the isolation process.
Whenever two or more sensory inputs are highly consistent in one or more dimension(s), observers are more likely to perceive them as a single multisensory event rather than as separate unimodal events. For audiovisual speech, but not for other (noncommunicative) events, participants exhibit a "unity effect," whereby they are less sensitive to temporal asynchrony (i.e., they are more likely to bind the multisensory signals together) for matched than for mismatched speech events. This finding suggests that the modulation of multisensory integration by the unity effect in humans may be specific to speech. To test this hypothesis directly, we investigated whether the unity effect would also influence the multisensory integration of vocalizations from another primate species, the rhesus monkey. Human participants made temporal order judgments for both matched and mismatched audiovisual stimuli presented at a range of stimulus onset asynchronies. The unity effect was examined with (1) a single call type across two different monkeys, (2) two different call types from the same monkey, (3) human versus monkey "cooing," and (4) speech sounds produced by a male and a female human. The results show that the unity effect influenced participants' performance only for the speech stimuli; no effect was observed for the monkey vocalizations or for the human imitations of monkey calls. These findings suggest that the facilitation of multisensory integration by the unity effect is specific to human speech signals.