Web-based experiments using visual stimuli have become increasingly common in recent years, but many frequently used stimuli in vision research have yet to be developed for online platforms. Here, we introduce the first open-access random-dot kinematogram (RDK) for use in web browsers. This fully customizable RDK offers options to implement several different types of noise (random position, random walk, random direction) and parameters to control aperture shape, coherence level, the number of dots, and other features. We include links to commented JavaScript code for easy implementation in web-based experiments, as well as an example of how this stimulus can be integrated as a plugin with a JavaScript library for online studies (jsPsych).
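As an illustration of the parameters the abstract describes (noise type, aperture shape, coherence, dot count), a trial using such a plugin might be configured roughly as follows. This is a hypothetical sketch: the parameter names are assumptions based on common jsPsych plugin conventions, not a verified API.

```javascript
// Hypothetical sketch of a jsPsych RDK trial configuration.
// Parameter names are illustrative assumptions, not a verified API.
const rdkTrial = {
  type: "rdk",                          // the RDK plugin
  number_of_dots: 300,                  // total dots in the aperture
  coherence: 0.5,                       // 50% of dots move coherently
  coherent_direction: 180,              // degrees; 180 = leftward motion
  RDK_type: 3,                          // noise type (e.g., random position)
  aperture_type: 1,                     // 1 = circular aperture
  choices: ["ArrowLeft", "ArrowRight"], // allowed response keys
  correct_choice: ["ArrowLeft"],        // key scored as correct
  trial_duration: 1000,                 // ms of stimulus presentation
};
```

In a real experiment this object would be pushed onto the jsPsych timeline; here it only shows the kinds of settings the stimulus exposes.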
Previous work has sought to understand decision confidence as a prediction of the probability that a decision will be correct, leading to debate over whether these predictions are optimal, and whether they rely on the same decision variable as decisions themselves. This work has generally relied on idealized, low-dimensional modeling frameworks, such as signal detection theory or Bayesian inference, leaving open the question of how decision confidence operates in the domain of high-dimensional, naturalistic stimuli. To address this, we developed a deep neural network model optimized to assess decision confidence directly given high-dimensional inputs such as images. The model naturally accounts for a number of puzzling dissociations between decisions and confidence, suggests a principled explanation of these dissociations in terms of optimization for the statistics of sensory inputs, and makes the surprising prediction that, despite these dissociations, decisions and confidence depend on a common decision variable.
Confidence can dissociate from perceptual accuracy, suggesting distinct computational and neural processes underlie these psychological functions. Recent investigations have therefore sought to experimentally isolate metacognitive processes by creating conditions where perceptual sensitivity is matched but confidence differs (“matched-performance / different-confidence”; MPDC). Despite these endeavors’ success, much remains unknown about MPDC effects and how to best harness them in experimental settings. Here we developed a principled approach to comprehensively characterizing MPDC effects through analyzing metaperceptual (i.e., type 2 psychometric) functions relating objective performance to subjective confidence across widely varying performance levels and experimental manipulations. We found that MPDC effect magnitude depends on stimulus properties, observers’ sensitivity level, and critically on trial type order (blocked or interleaved across stimulus property variations). Our findings provide the first comprehensive exploration of MPDC effects, offer a prescriptive guide to metaperceptual analysis, and suggest optimal experimental paradigms for experimentally isolating metacognition and awareness in future studies.
BACKGROUND AND PURPOSE: Vestibular symptoms are common after concussion. Vestibular Ocular Motor Screening identifies vestibular impairment, including postconcussive visual motion sensitivity, though the underlying functional brain alterations are not defined. We hypothesized that alterations in multisensory processing are responsible for postconcussive visual motion sensitivity, are detectable on fMRI, and correlate with symptom severity. MATERIALS AND METHODS: Twelve patients with subacute postconcussive visual motion sensitivity and 10 healthy control subjects underwent vestibular testing and a novel fMRI visual-vestibular paradigm including 30-second "neutral" or "provocative" videos. The presence and intensity of symptoms were rated immediately after each video. fMRI group-level analysis was performed for a "provocative-neutral" condition. Z-statistic images were thresholded nonparametrically using clusters determined by Z > 2.3 and a corrected cluster significance threshold of P = .05. Symptoms assessed on Vestibular Ocular Motor Screening were correlated with fMRI mean parameter estimates using Pearson correlation coefficients. RESULTS: Subjects with postconcussive visual motion sensitivity had significantly more Vestibular Ocular Motor Screening abnormalities and increased symptoms while viewing provocative videos. While robust mean activation in the primary and secondary visual areas, the parietal lobe, parietoinsular vestibular cortex, and cingulate gyrus was seen in both groups, selectively increased activation was seen in subjects with postconcussive visual motion sensitivity in the primary vestibular/adjacent cortex and inferior frontal gyrus, which are putative multisensory visual-vestibular processing centers.
Moderate-to-strong correlations were found between Vestibular Ocular Motor Screening scores and fMRI activation in the left frontal eye field, left middle temporal visual area, and right posterior hippocampus. CONCLUSIONS: Increased fMRI brain activation in visual-vestibular multisensory processing regions is selectively seen in patients with postconcussive visual motion sensitivity and is correlated with Vestibular Ocular Motor Screening symptom severity, suggesting that increased visual input weighting into the vestibular network may underlie postconcussive visual motion sensitivity.
Some researchers have argued that normal human observers can exhibit “blindsight-like” behavior: the ability to discriminate or identify a stimulus without being aware of it. However, we recently used a bias-free task to show that what looks like blindsight may in fact be an artifact of typical experimental paradigms’ susceptibility to response bias. While those findings challenge previous reports of blindsight in normal observers, they do not rule out the possibility that different stimuli or techniques could still reveal perception without awareness. One intriguing candidate is emotion processing, since processing of emotional stimuli (e.g., fearful or happy faces) has been reported to potentially bypass conscious visual circuits. Here we used the bias-free blindsight paradigm to investigate whether emotion processing might reveal “featural blindsight,” i.e., the ability to identify a face’s emotion without introspective access to the task-relevant features that led to the discrimination decision. However, we saw no evidence for emotion processing “featural blindsight”: as before, whenever participants could identify a face’s emotion they displayed introspective access to the task-relevant features, matching predictions of a Bayesian ideal observer. These results add to the growing body of evidence that perceptual discrimination ability without introspective access may not be possible for neurologically intact observers.
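The Bayesian ideal-observer prediction mentioned above can be illustrated with a minimal sketch, assuming a textbook equal-variance Gaussian model of a two-choice task (a standard simplification, not the authors’ actual model): the observer chooses by the sign of the evidence, and confidence is the posterior probability that this choice is correct.

```javascript
// Minimal equal-variance Gaussian ideal observer for a two-choice task.
// The two stimulus categories produce evidence x ~ N(+mu, sigma) or
// N(-mu, sigma). The ideal observer chooses by the sign of x; confidence
// is the posterior probability that the chosen category is correct.
function idealConfidence(x, mu, sigma) {
  // log-likelihood ratio of the two categories: 2*mu*x / sigma^2;
  // taking |LLR| gives the evidence for the chosen category.
  const absLLR = Math.abs((2 * mu * x) / (sigma * sigma));
  return 1 / (1 + Math.exp(-absLLR)); // ranges from 0.5 up toward 1
}
```

Under this model, confidence is lowest (0.5) for evidence at the category boundary and grows toward certainty as the evidence moves away from it, symmetrically for either choice.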