We can make exquisitely precise movements without the apparent need for conscious monitoring. But can we monitor the low-level movement parameters when prompted? And what are the mechanisms that allow us to monitor our movements? To answer these questions, we designed a semi-virtual ball throwing task. On each trial, participants first threw a virtual ball by moving their arm (with or without visual feedback, or replayed from a previous trial) and then made a two-alternative forced choice on the resulting ball trajectory. They then rated their confidence in their decision. We measured metacognitive efficiency using meta-d′/d′ and compared it between different informational domains of the first-order task (motor, visuomotor or visual information alone), as well as between two different versions of the task based on different parameters of the movement: proximal (position of the arm) or distal (resulting trajectory of the ball thrown). We found that participants were able to monitor their performance based on distal motor information as well as when proximal information was available. Their metacognitive efficiency was also equally high in conditions with different sources of information available. The analysis of correlations across participants revealed an unexpected result: while metacognitive efficiency correlated between informational domains (which would indicate domain-generality of metacognition), it did not correlate across the different parameters of movement. We discuss possible sources of this discrepancy and argue that specific first-order task demands may play a crucial role in our metacognitive ability and should be considered when making inferences about domain-generality based on correlations.
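To make the meta-d′/d′ measure concrete: d′ quantifies first-order sensitivity from hit and false-alarm rates, while meta-d′ expresses how well confidence ratings track accuracy on the same d′ scale, so their ratio indexes metacognitive efficiency (a ratio of 1 marks a metacognitively ideal observer). The sketch below computes d′ only; meta-d′ is normally estimated by fitting a signal-detection model to the confidence-rating distributions, so the fitted value used here is purely hypothetical for illustration.

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """First-order sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Illustrative first-order performance (hypothetical rates)
d1 = d_prime(0.80, 0.30)

# meta-d' would come from fitting an SDT model to the confidence data;
# we plug in a hypothetical fitted value just to show the ratio.
meta_d = 1.1
efficiency = meta_d / d1  # meta-d'/d' close to 1 -> efficient metacognition
```

Note that this toy ratio only illustrates the definition; in practice meta-d′ is estimated with dedicated fitting procedures, not computed in closed form.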
were supported by DFG, German Research Foundation (project number 222641018 - SFB/TTR 135). Data, code and pre-registration protocols are available at https://osf.io/kyhu7/ (Experiment 1) and https://osf.io/sy342/ (Experiment 2). The data discussed in this article were first published in "The Confidence Database", https://osf.io/s46pr/ (Rahnev et al., 2020). We have no known conflict of interest to disclose.
Confidence judgements are a central tool in metacognition research. In a typical task, participants first perform perceptual (first-order) decisions and then rate their confidence in these decisions. The relationship between confidence and first-order accuracy is taken as a measure of metacognitive performance. Confidence is often assumed to stem from decision-monitoring processes alone, but processes that co-occur with the first-order decision may also play a role in confidence formation. In fact, some recent studies have revealed that directly manipulating motor regions in the brain, or the timing of first-order decisions relative to second-order ones, affects confidence judgements. This finding suggests that confidence could be informed by a readout of reaction times in addition to decision-monitoring processes. To test this possibility, we assessed the contribution of response-related signals to confidence and, in particular, to metacognitive performance (i.e., a measure of the adequacy of these confidence judgements). In human volunteers, we measured the effect of making an overt (vs. covert) decision, as well as the effect of pairing an action to the stimulus about which the first-order decision is made. Against our expectations, we found no differences in overall confidence or metacognitive performance when first-order responses were covert as opposed to overt. Further, actions paired to the visual stimuli led to higher confidence ratings, but did not affect metacognitive performance. These results suggest that confidence ratings do not always incorporate motor information.
Significance statement
To measure metacognition, or the ability to monitor one's own thoughts, experimental tasks often require human volunteers to, first, make a perceptual decision (the "first-order task") and, then, rate their confidence in their own decision (the "second-order task").
In this paradigm, both first- and second-order information could, in principle, influence confidence judgements. But only the latter is truly metacognitive. To determine whether confidence is a valid metacognitive measure, we compared confidence ratings between two conditions: with overt responses, where participants provided both first- and second-order responses; and with covert responses, where participants reported their confidence in a decision that they had not executed. Removing first-order decisions did not affect confidence, which validates confidence as an introspective measure.
Understanding how people rate their confidence is critical for characterizing a wide range of perceptual, memory, motor, and cognitive processes. However, as in many other fields, progress has been slowed by the difficulty of collecting new data and the unavailability of existing data. To address this issue, we created a large database of confidence studies spanning a broad set of paradigms, participant populations, and fields of study. The data from each study are structured in a common, easy-to-use format that can be easily imported and analyzed in multiple software packages. Each dataset is further accompanied by an explanation regarding the nature of the collected data. At the time of publication, the Confidence Database (available at osf.io/s46pr) contained 145 datasets with data from over 8,700 participants and almost 4 million trials. The database will remain open for new submissions indefinitely and is expected to continue to grow. We show the usefulness of this large collection of datasets in four different analyses that provide precise estimation for several foundational confidence-related effects and lead to new findings that depend on the availability of large quantities of data. This Confidence Database will continue to enable new discoveries and can serve as a blueprint for similar databases in related fields.