To best interact with the external world, humans are often required to consider the quality of their actions. Sometimes the environment furnishes rewards or punishments to signal action efficacy. However, when such feedback is absent or only partial, we must rely on internally generated signals to evaluate our performance (i.e., metacognition). Yet, very little is known about how humans form such judgements of sensorimotor confidence. Do they monitor their performance? Or do they rely on cues to sensorimotor uncertainty to infer how likely it is that they performed well? We investigated motor metacognition in two visuomotor tracking experiments in which participants used a mouse cursor to track a dot cloud that moved along an unpredictable, random trajectory. Their goal was to infer the underlying target generating the dots, track it for several seconds, and then report their sensorimotor confidence by judging their tracking as better or worse than their average. In Experiment 1, we manipulated task difficulty in two ways: varying the size of the dot cloud and varying the stability of the target's velocity. In Experiment 2, the stimulus statistics were fixed and the duration of the stimulus presentation was varied. We found similar levels of metacognitive sensitivity in all experiments, with the temporal analysis revealing a recency effect: error late in the trial had a greater influence on sensorimotor confidence. In sum, these results indicate that humans predominantly monitor their tracking performance, albeit inefficiently, to judge sensorimotor confidence.
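To make the recency effect concrete, a performance-monitoring read-out can be sketched as a recency-weighted average of the moment-to-moment tracking error, compared against the observer's history of past trials. The sketch below is a minimal illustration, not the paper's fitted model; the exponential weighting, the time constant `tau`, and the median comparison are all assumptions.

```python
import numpy as np

def recency_weighted_error(errors, tau=1.0, dt=0.01):
    """Exponentially weight per-frame tracking error so that late-trial
    error dominates, mimicking the reported recency effect.
    errors : 1-D array of cursor-target distances over the trial
    tau    : time constant (s) of the recency weighting (free parameter)
    dt     : frame duration (s)
    """
    t = np.arange(len(errors)) * dt
    w = np.exp((t - t[-1]) / tau)   # weight = 1 at trial end, decays backward
    return np.sum(w * errors) / np.sum(w)

def confidence_report(weighted_error, history):
    """Report 'better than average' if this trial's weighted error is
    below the median of previous trials' weighted errors."""
    return weighted_error < np.median(history)
```

With a very large `tau` this reduces to an unweighted performance monitor; a small `tau` makes the confidence report depend almost entirely on late-trial error.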
Perceptual confidence is an important internal signal about the certainty of our decisions, and there is substantial debate about how it is computed. We highlight three types of confidence metric from the literature: observers either use 1) the full probability distribution to compute the probability of being correct (Probability metrics), 2) point estimates from the perceptual decision process to estimate uncertainty (Evidence-Strength metrics), or 3) heuristic confidence from stimulus-based cues to uncertainty (Heuristic metrics). These metrics are rarely tested against one another, so we examined models of all three types on a suprathreshold spatial discrimination task. Observers were shown a cloud of dots sampled from a dot-generating distribution and judged whether the mean of the distribution was left or right of centre. In addition to varying the horizontal position of the mean, there were two sensory uncertainty manipulations: the number of dots sampled and the spread of the generating distribution. After every two perceptual decisions, observers made a confidence forced-choice judgement of whether they were more confident in the first or second decision. Model results showed that the majority of observers were best fit by either: 1) the Heuristic model, which used dot-cloud position, spread, and number of dots as cues; or 2) an Evidence-Strength model, which computed the distance between the sensory measurement and the discrimination criterion, scaled according to sensory uncertainty. An accidental repetition of some sessions also allowed for the measurement of confidence agreement for identical pairs of stimuli. This N-pass analysis revealed that human observers were more consistent than their best-fitting model would predict, indicating that there are still aspects of confidence not captured by our modelling. As such, we propose confidence agreement as a useful technique for computational studies of confidence. Taken together, these findings highlight the idiosyncratic nature of confidence computations for complex decision contexts and the need to consider different potential metrics and transformations in the confidence computation.
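The three metric types lend themselves to simple computational forms. The sketch below is a hedged illustration under standard Gaussian signal-detection assumptions; the variable names, cue weights, and log transform of dot count are ours, not the paper's fitted parameterization.

```python
import numpy as np
from scipy.stats import norm

def probability_confidence(x_hat, criterion, sigma):
    """Probability metric: probability of being correct given the
    measurement, assuming a Gaussian likelihood with known sigma."""
    return norm.cdf(abs(x_hat - criterion) / sigma)

def evidence_strength_confidence(x_hat, criterion, sigma):
    """Evidence-Strength metric: distance of the point estimate from the
    discrimination criterion, scaled by the sensory uncertainty."""
    return abs(x_hat - criterion) / sigma

def heuristic_confidence(cloud_mean, cloud_spread, n_dots, w=(1.0, -1.0, 0.5)):
    """Heuristic metric: weighted combination of stimulus cues (cloud
    position, spread, dot count); weights are illustrative free parameters."""
    return w[0] * abs(cloud_mean) + w[1] * cloud_spread + w[2] * np.log(n_dots)

def confidence_forced_choice(conf_1, conf_2):
    """Confidence forced-choice: report the interval (1 or 2) with the
    greater confidence value."""
    return 1 if conf_1 >= conf_2 else 2
```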
Despite the tangible progress in psychological and cognitive sciences over the last several years, these disciplines still trail other, more mature sciences in identifying the most important questions that need to be answered. Reaching such a consensus could lead to greater synergy across different laboratories, faster progress, and increased focus on solving important problems rather than pursuing isolated, niche efforts. Here, 26 researchers from the field of visual metacognition reached consensus on four long-term and two medium-term common goals. We describe the process that we followed, the goals themselves, and our plans for accomplishing these goals. If this effort proves successful within the next few years, such consensus building around common goals could be adopted more widely in psychological science.
Priors and payoffs are known to affect perceptual decision-making, but little is understood about how they influence confidence judgments. For optimal perceptual decision-making, both priors and payoffs should be considered when selecting a response. However, for confidence to reflect the probability of being correct in a perceptual decision, priors should affect confidence but payoffs should not. To experimentally test whether human observers follow this normative behavior for natural confidence judgments, we conducted an orientation-discrimination task with varied priors and payoffs that probed both perceptual and metacognitive decision-making. The placement of discrimination and confidence criteria was examined according to several plausible Signal Detection Theory models. In the normative model, observers use the optimal discrimination criterion (i.e., the criterion that maximizes expected gain) and confidence criteria that shift with the discrimination criterion that maximizes accuracy (i.e., are not affected by payoffs). No observer was consistent with this model, with the majority exhibiting non-normative confidence behavior. One subset of observers ignored both priors and payoffs for confidence, always fixing the confidence criteria around the neutral discrimination criterion. The other group of observers incorrectly incorporated payoffs into their confidence by always shifting their confidence criteria with the same gains-maximizing criterion used for discrimination. Such metacognitive mistakes could have negative consequences outside the laboratory setting, particularly when priors or payoffs are not matched for all the possible decision alternatives.
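For reference, under an equal-variance Gaussian SDT model the gains-maximizing criterion follows from the standard likelihood-ratio rule. The sketch below assumes a particular payoff-matrix layout and is illustrative only; it is not the paper's fitted model.

```python
import numpy as np

def gains_maximizing_criterion(mu1, mu2, sigma, p1, V11, V22, V12, V21):
    """Criterion on the decision axis that maximizes expected gain for an
    equal-variance Gaussian SDT model. V_rs = payoff for response r when
    stimulus s is true (a hypothetical layout for illustration)."""
    beta = (p1 / (1.0 - p1)) * (V11 - V21) / (V22 - V12)
    # place the criterion where the likelihood ratio equals beta
    return (mu1 + mu2) / 2.0 + sigma**2 * np.log(beta) / (mu2 - mu1)

def accuracy_maximizing_criterion(mu1, mu2, sigma, p1):
    """Same rule with payoffs neutralized: only the prior shifts the
    criterion. The normative model anchors confidence criteria here."""
    return gains_maximizing_criterion(mu1, mu2, sigma, p1, 1, 1, 0, 0)
```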
We tested whether fast flicker can capture attention using eight flicker frequencies from 20–96 Hz, including several too high to be perceived (>50 Hz). Using a 480 Hz visual display rate, we presented smoothly sampled sinusoidal temporal modulations at 20, 30, 40, 48, 60, 69, 80, and 96 Hz. We first established flicker detection rates for each frequency. Performance was at or near ceiling up to 48 Hz and dropped sharply to chance level at 60 Hz and above. We then presented the same flickering stimuli as pre-cues in a visual search task containing five elements. Flicker location varied randomly and was therefore congruent with the target location on 20% of trials. Comparing congruent and incongruent trials revealed a very strong congruency effect (faster search for cued targets) for all detectable frequencies (20–48 Hz) but no effect for the faster flicker rates that were detected at chance. This pattern of results, obtained with brief (58 ms) flicker cues, was replicated with long (1000 ms) flicker cues intended to allow entrainment to the flicker frequency. These results indicate that only visible flicker serves as an exogenous attentional cue and that flicker rates too high to be perceived are completely ineffective.
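For intuition about how such high rates can be displayed, the sketch below generates per-frame luminance values for a smoothly sampled sinusoid at a 480 Hz refresh; even the highest rate tested (96 Hz) gets five samples per cycle. The mean luminance and modulation amplitude are illustrative assumptions.

```python
import numpy as np

REFRESH = 480.0                            # display refresh rate (Hz)
FREQS = [20, 30, 40, 48, 60, 69, 80, 96]   # flicker frequencies tested (Hz)

def flicker_luminance(freq, duration=0.058, mean=0.5, amplitude=0.5):
    """Per-frame luminance for a smoothly sampled sinusoidal flicker.
    duration=0.058 matches the brief (58 ms) cue, ~28 frames at 480 Hz."""
    n_frames = int(round(duration * REFRESH))
    t = np.arange(n_frames) / REFRESH
    return mean + amplitude * np.sin(2 * np.pi * freq * t)
```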
Integration of sensory information across multiple senses is most likely to occur when signals are spatiotemporally coupled. Yet, recent research on audiovisual rate discrimination indicates that random sequences of light flashes and auditory clicks are integrated optimally regardless of temporal correlation. This may be due to 1) temporal averaging rendering temporal cues less effective; 2) difficulty extracting causal-inference cues from rapidly presented stimuli; or 3) task demands prompting integration without concern for the spatiotemporal relationship between the signals. We conducted a rate-discrimination task (Exp 1), using slower, more random sequences than previous studies, and a separate causal-judgement task (Exp 2). Unisensory and multisensory rate-discrimination thresholds were measured in Exp 1 to assess the effects of temporal correlation and spatial congruence on integration. The performance of most subjects was indistinguishable from optimal for spatiotemporally coupled stimuli, and generally sub-optimal in other conditions, suggesting observers used a multisensory mechanism that is sensitive to both temporal and spatial causal-inference cues. In Exp 2, subjects reported whether temporally uncorrelated (but spatially co-located) sequences were perceived as sharing a common source. A unified percept was affected by click-flash pattern similarity and by the maximum temporal offset between individual clicks and flashes, but not by the proportion of synchronous click-flash pairs. A simulation analysis revealed that the stimulus-generation algorithms of previous studies are likely responsible for the observed integration of temporally independent sequences. By combining results from Exps 1 and 2, we found better rate-discrimination performance for sequences that were more likely to be integrated than for those that were not. Our results support the principle that multisensory stimuli are optimally integrated when spatiotemporally coupled, and provide insight into the temporal features used for coupling in causal inference.
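The optimality benchmark here is the standard maximum-likelihood (reliability-weighted) integration prediction: the bimodal discrimination threshold should fall below both unimodal thresholds. A minimal sketch, assuming thresholds are proportional to the underlying sensory noise:

```python
import numpy as np

def optimal_bimodal_threshold(thresh_a, thresh_v):
    """Maximum-likelihood integration prediction for the audiovisual
    rate-discrimination threshold, from the unimodal thresholds:
    sigma_av^2 = sigma_a^2 * sigma_v^2 / (sigma_a^2 + sigma_v^2)."""
    return np.sqrt((thresh_a**2 * thresh_v**2) / (thresh_a**2 + thresh_v**2))

# Hypothetical example: auditory threshold 2.0 events/s, visual 3.0 events/s
# -> optimal bimodal prediction of ~1.66 events/s, below both unimodal values
print(optimal_bimodal_threshold(2.0, 3.0))
```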
“Representational Momentum” (RM) is a mislocalization of the endpoint of a moving target in the direction of motion. In vision, RM has been shown to increase with target velocity. In audition, however, the effect of target velocity is unclear. Using a perceptual paradigm with moving broadband noise targets in Virtual Auditory Space, we found that RM increased linearly from 0.9° to 2.3° as target velocity increased from 25°/s to 100°/s. Accounting for the effect of eye position also reduced response variance. These results suggest that RM may be the result of similar underlying mechanisms in both modalities.
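From the reported endpoints, the implied linear relation is roughly 0.019° of forward mislocalization per deg/s of target velocity; a quick check:

```python
# Slope implied by the reported endpoints (illustrative arithmetic only)
velocities = (25.0, 100.0)   # target velocity (deg/s)
rm = (0.9, 2.3)              # forward mislocalization (deg)
slope = (rm[1] - rm[0]) / (velocities[1] - velocities[0])
print(f"~{slope:.3f} deg of RM per deg/s")   # ~0.019
```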