Understanding how people rate their confidence is critical for characterizing a wide range of perceptual, memory, motor, and cognitive processes. To enable the continued exploration of these processes, we created a large database of confidence studies spanning a broad set of paradigms, participant populations, and fields of study. The data from each study are structured in a common format.
The processing of interoceptive signals in the insular cortex is thought to underlie self-awareness. However, the influence of interoception on visual awareness and the role of the insular cortex in this process remain unclear. Here, we show in a series of experiments that the relative timing of visual stimuli with respect to the heartbeat modulates visual awareness. We used two masking techniques and show that conscious access for visual stimuli synchronous to participants' heartbeat is suppressed compared with the same stimuli presented asynchronously to their heartbeat. Two independent brain imaging experiments using high-resolution fMRI revealed that the insular cortex was sensitive to both visible and invisible cardio-visual stimulation, showing reduced activation for visual stimuli presented synchronously to the heartbeat. Our results show that interoceptive insular processing affects visual awareness, demonstrating the role of the insula in integrating interoceptive and exteroceptive signals and in the processing of conscious signals beyond self-awareness.
Human metacognition, or the capacity to introspect on one's own mental states, has been mostly characterized through confidence reports in visual tasks. A pressing question is to what extent results from visual studies generalize to other domains. Answering this question would determine whether metacognition operates through shared, supramodal mechanisms or through idiosyncratic, modality-specific mechanisms. Here, we report three new lines of evidence for decisional and postdecisional mechanisms arguing for the supramodality of metacognition. First, metacognitive efficiency correlated among auditory, tactile, visual, and audiovisual tasks. Second, confidence in an audiovisual task was best modeled using supramodal formats based on integrated representations of auditory and visual signals. Third, confidence in correct responses involved similar electrophysiological markers for visual and audiovisual tasks that are associated with motor preparation preceding the perceptual judgment. We conclude that the supramodality of metacognition relies on supramodal confidence estimates and decisional signals that are shared across sensory modalities.

Metacognitive monitoring is the capacity to access, report, and regulate one's own mental states. In perception, this allows rating our confidence in what we have seen, heard, or touched. Although metacognitive monitoring can operate on different cognitive domains, it remains unknown whether it involves a single supramodal mechanism common to multiple cognitive domains or modality-specific mechanisms idiosyncratic to each domain. Here, we bring evidence in favor of the supramodality hypothesis by showing that participants with high metacognitive performance in one modality are likely to perform well in other modalities. Based on computational modeling and electrophysiology, we propose that supramodality can be explained by the existence of supramodal confidence estimates and by the influence of decisional cues on confidence estimates.
In the study of nonconscious processing, different methods have been used to render stimuli invisible. While their properties are well described, the level at which they disrupt nonconscious processing remains unclear. Yet such an accurate estimation of the depth of nonconscious processes is crucial for a clear differentiation between conscious and nonconscious cognition. Here, we compared the processing of facial expressions rendered invisible through gaze-contingent crowding (GCC), masking, and continuous flash suppression (CFS), three techniques relying on different properties of the visual system. We found that both pictures and videos of happy faces suppressed from awareness by GCC were processed such as to bias subsequent preference judgments. The same stimuli manipulated with visual masking and CFS did not significantly bias preference judgments, although they were processed such as to elicit perceptual priming. A significant difference in preference bias was found between GCC and CFS, but not between GCC and masking. These results provide new insights regarding the nonconscious impact of emotional features, and highlight the need for rigorous comparisons between the different methods employed to prevent perceptual awareness.
Recent studies have highlighted the role of multisensory integration as a key mechanism of self-consciousness. In particular, integration of bodily signals within the peripersonal space (PPS) underlies the experience of the self in a body we own (self-identification) and that is experienced as occupying a specific location in space (self-location), two main components of bodily self-consciousness (BSC). Experiments investigating the effects of multisensory integration on BSC have typically employed supra-threshold sensory stimuli, neglecting the role of unconscious sensory signals in BSC, as tested in other consciousness research. Here, we used psychophysical techniques to test whether multisensory integration of bodily stimuli underlying BSC also occurs for multisensory inputs presented below the threshold of conscious perception. Our results indicate that visual stimuli rendered invisible through continuous flash suppression boost the processing of tactile stimuli on the body (Exp. 1) and enhance the perception of near-threshold tactile stimuli (Exp. 2), but only once they enter the PPS. We then employed unconscious multisensory stimulation to manipulate BSC. Participants were presented with tactile stimulation on their body and with visual stimuli on a virtual body, seen at a distance, which were either visible or rendered invisible. We found that participants reported higher self-identification with the virtual body under synchronous visuo-tactile stimulation (as compared to asynchronous stimulation; Exp. 3), and shifted their self-location toward the virtual body (Exp. 4), even when the stimuli were fully invisible. Our results indicate that multisensory inputs, even outside of awareness, are integrated and affect the phenomenological content of self-consciousness, grounding BSC firmly in the field of psychophysical consciousness studies.
Hallucinations in Parkinson’s disease (PD) are disturbing and frequent non-motor symptoms and constitute a major risk factor for psychosis and dementia. We report a robotics-based approach applying conflicting sensorimotor stimulation, enabling the induction of presence hallucinations (PHs) and the characterization of a subgroup of patients with PD with enhanced sensitivity to conflicting sensorimotor stimulation and robot-induced PH. We next identify the fronto-temporal network of PH by combining MR-compatible robotics (and sensorimotor stimulation in healthy participants) and lesion network mapping (in neurological patients without PD). This PH network was selectively disrupted in an additional and independent cohort of patients with PD, predicted the presence of symptomatic PH, and was associated with cognitive decline. These robotics-neuroimaging findings extend existing sensorimotor hallucination models to PD and reveal the pathological cortical sensorimotor processes of PH in PD, potentially indicating a more severe form of PD that has been associated with psychosis and cognitive decline.
Multisensory integration is thought to require conscious perception. Although previous studies have shown that an invisible stimulus could be integrated with an audible one, none have demonstrated integration of two subliminal stimuli of different modalities. Here, pairs of identical or different audiovisual target letters (the sound /b/ with the written letter “b” or “m,” respectively) were preceded by pairs of masked identical or different audiovisual prime digits (the sound /6/ with the written digit “6” or “8,” respectively). In three experiments, awareness of the audiovisual digit primes was manipulated, such that participants were either unaware of the visual digit, the auditory digit, or both. Priming of the semantic relations between the auditory and visual digits was found in all experiments. Moreover, a further experiment showed that unconscious multisensory integration was not obtained when participants did not undergo prior conscious training of the task. This suggests that following conscious learning, unconscious processing suffices for multisensory integration.