Despite being a fundamental dimension of experience, how the human brain generates the perception of time remains unknown. Here, we provide a novel explanation of how human time perception might be accomplished, based on non-temporal perceptual classification processes. To demonstrate this proposal, we build an artificial neural system centred on a feed-forward image classification network, functionally similar to human visual processing. In this system, input videos of natural scenes drive changes in network activation, and the accumulation of salient changes in activation is used to estimate duration. Estimates produced by this system match human reports made about the same videos, replicating key qualitative biases, including the difference between scenes of walking around a busy city and sitting in a cafe or office. Our approach provides a working model of duration perception from stimulus to estimation and presents a new direction for examining the foundations of this central aspect of human experience.
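The accumulation mechanism described above can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in, not the paper's actual pipeline: a single random projection with a ReLU takes the place of the pretrained classification network, and the `threshold` and `scale` parameters, which in the real system would be calibrated against human reports, are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pretrained classification network:
# one fixed random projection followed by a ReLU.
W = rng.standard_normal((64, 256))

def activations(frame):
    """Feed-forward activation for one flattened video frame."""
    return np.maximum(W @ frame, 0.0)

def estimate_duration(frames, threshold=10.0, scale=1.0):
    """Count salient (above-threshold) changes in network activation
    across successive frames; the accumulated count, rescaled, serves
    as the duration estimate (arbitrary units)."""
    prev = activations(frames[0])
    salient = 0
    for frame in frames[1:]:
        cur = activations(frame)
        if np.linalg.norm(cur - prev) > threshold:
            salient += 1  # a salient change in activation was detected
        prev = cur
    return scale * salient

# A "busy" input (large frame-to-frame changes) should yield a longer
# estimate than a near-static one of the same physical length,
# mirroring the city-vs-cafe bias.
busy = rng.standard_normal((100, 256))
static = np.repeat(rng.standard_normal((1, 256)), 100, axis=0)
static = static + 0.01 * rng.standard_normal(static.shape)

print(estimate_duration(busy), estimate_duration(static))
```

The design choice worth noting is that nothing in the sketch represents time explicitly: the estimate is driven entirely by how much the (non-temporal) classification features change, which is the core of the proposal.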
The recent history of perceptual experience has been shown to influence subsequent perception. Classically, this dependence on perceptual history has been examined in sensory-adaptation paradigms, in which prolonged exposure to a particular stimulus (e.g., a vertically oriented grating) changes the perception of subsequently presented stimuli (e.g., the tilt aftereffect). More recently, several studies have investigated the influence of shorter perceptual exposure, with effects, referred to as serial dependence, described for a variety of low- and high-level perceptual dimensions. In this study, we examined serial dependence in the processing of dispersion statistics, namely variance: a key descriptor of the environment and an indicator of the precision and reliability of ensemble representations. We found two opposite serial dependences operating at different timescales and likely originating at different processing levels. A positive, Bayesian-like bias was driven by the most recent exposures, depended on feature-specific decision making, and appeared only when high confidence was placed in that decision; a longer-lasting negative bias, akin to an adaptation aftereffect, became manifest as the positive bias declined. Both effects were independent of the spatial presentation location and of the similarity of other closely related attributes, such as the mean direction of the visual variance stimulus. These findings suggest that visual variance processing occurs in high-level areas but, like many other perceptual dimensions, is subject to a combination of multilevel mechanisms balancing perceptual stability and sensitivity.
The experience of authorship over one’s actions and their consequences—sense of agency—is a fundamental aspect of conscious experience. In recent years, it has become common to use intentional binding as an implicit measure of the sense of agency. However, it remains contentious whether reported intentional-binding effects reflect a role of intention-related information in perception or merely represent a strong case of multisensory causal binding. Here, we used a novel virtual-reality setup to demonstrate binding effects of identical magnitude in both the presence and the complete absence of intentional action, with perceptual stimuli matched for temporal and spatial information. Our results demonstrate that intentional-binding-like effects are most simply accounted for by multisensory causal binding, without necessarily being related to intention or agency. Future studies that relate binding effects to agency must provide evidence for effects beyond those expected from multisensory causal binding alone.
Altered states of consciousness, such as psychotic or pharmacologically induced hallucinations, provide a unique opportunity to examine the mechanisms underlying conscious perception. However, the phenomenological properties of these states are difficult to isolate experimentally from other, more general physiological and cognitive effects of psychoactive substances or psychopathological conditions. Simulating phenomenological aspects of altered states in the absence of these more general effects therefore provides an important experimental tool for consciousness science and psychiatry. Here we describe such a tool, which we call the Hallucination Machine. It comprises a novel combination of two powerful technologies: deep convolutional neural networks (DCNNs) and panoramic videos of natural scenes, viewed immersively through a head-mounted display (panoramic VR). This combination allows us to simulate visual hallucinatory experiences in a biologically plausible and ecologically valid way. Two experiments illustrate potential applications of the Hallucination Machine. First, we show that the system induces visual phenomenology qualitatively similar to classical psychedelics. In a second experiment, we find that simulated hallucinations do not evoke the temporal distortion commonly associated with altered states. Overall, the Hallucination Machine offers a valuable new technique for simulating altered phenomenology without directly altering the underlying neurophysiology.
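The DCNN component of such a system works along the lines of DeepDream-style feature visualization: ascending the gradient of a layer's activation with respect to the input exaggerates whatever patterns that layer detects, producing hallucination-like imagery. The following is a toy sketch of that principle only; the single random linear layer is a hypothetical stand-in for a pretrained DCNN, and the step count and learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for one layer of a pretrained DCNN.
W = rng.standard_normal((32, 128)) / np.sqrt(128)

def layer(x):
    """ReLU activation of the stand-in layer."""
    return np.maximum(W @ x, 0.0)

def dream(image, steps=50, lr=0.1):
    """DeepDream-style gradient ascent: modify the input so that the
    layer's total activation grows, exaggerating the features the
    layer responds to."""
    x = image.copy()
    for _ in range(steps):
        pre = W @ x
        # Gradient of sum(relu(W @ x)) with respect to x.
        grad = W.T @ (pre > 0).astype(float)
        x += lr * grad
    return x

img = rng.standard_normal(128)
out = dream(img)
# Total layer activation should have increased after the ascent.
print(layer(img).sum(), layer(out).sum())
```

In the actual system this ascent runs over frames of panoramic video rather than a random vector, so the exaggerated features are overlaid on a natural scene that the participant views immersively.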
Sense of agency, the experience of controlling external events through one's actions, stems from contiguity between action- and effect-related signals. Here we show that human observers link their action- and effect-related signals using a computational principle common to cross-modal sensory grouping. We first report that detection of a delay between tactile and visual stimuli is enhanced when both stimuli are synchronized with separate auditory stimuli (experiment 1). This occurs because the synchronized auditory stimuli hinder the potential grouping between the tactile and visual stimuli. We subsequently demonstrate an analogous effect for an observer's key press as an action and a sensory event as its effect. This change is associated with a modulation of the sense of agency; namely, the sense of agency, as evaluated by apparent compression of action-effect intervals (intentional binding) or by subjective causality ratings, is impaired when both the participant's action and its putative visual effect are synchronized with auditory tones (experiments 2 and 3). Moreover, a similar role of action-effect grouping in determining the sense of agency is demonstrated when the additional signal is presented in the same modality as the effect event (experiment 4). These results are consistent with the view that the sense of agency results from general processes of causal perception and that cross-modal grouping plays a central role in these processes.