2023
DOI: 10.1371/journal.pone.0274429

Predicting reliability through structured expert elicitation with the repliCATS (Collaborative Assessments for Trustworthy Science) process

Abstract: As replications of individual studies are resource intensive, techniques for predicting the replicability are required. We introduce the repliCATS (Collaborative Assessments for Trustworthy Science) process, a new method for eliciting expert predictions about the replicability of research. This process is a structured expert elicitation approach based on a modified Delphi technique applied to the evaluation of research claims in social and behavioural sciences. The utility of processes to predict replicability…

Cited by 10 publications (16 citation statements) · References 45 publications
“…Although we have emphasized the utility of coherence-weighting as a tool that can be used even in data-poor environments, nothing prevents decision makers from combining it with other techniques to further improve the accuracy of aggregated estimates, such as other performance-weighted methods (Bolger & Rowe, 2015; Budescu & Chen, 2015; Clemen & Winkler, 1999; Collins et al., in press; Himmelstein et al., 2021). Furthermore, practitioners can combine ensembles of methods such as competitive (Lichtendahl et al., 2013) or structured (Fraser et al., 2023) elicitation methods; enhancing the salience of private versus public information (Larrick et al., 2012); choosing smaller, wiser crowds (Soll et al., 2010); trimming opinion pools to account for under- and overconfidence (Jose et al., 2014; Yaniv, 1997); up-weighting assessors who update estimates frequently in small increments (Atanasov et al., 2020); or extremizing judgments (Baron et al., 2014; Hanea et al., 2021; Satopää & Ungar, 2015) to further improve accuracy. The latter is particularly appealing in general-knowledge tasks where the “outcomes” are, by definition, extreme.…”
Section: Discussion
confidence: 99%
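Two of the aggregation techniques named in the excerpt above can be sketched briefly. The following Python snippet is an illustrative sketch only: the function names, the sample judgments, and the `alpha` parameter are hypothetical choices of this write-up, not taken from the cited works, which describe whole families of such methods.

```python
import numpy as np

def trimmed_opinion_pool(probs, trim=0.1):
    """Symmetric trimmed mean of probability judgments: drop the most
    extreme estimates on each tail before averaging (one simple way to
    'trim an opinion pool', in the spirit of Jose et al., 2014)."""
    probs = np.sort(np.asarray(probs, dtype=float))
    k = int(len(probs) * trim)  # judgments to drop per tail
    trimmed = probs[k:len(probs) - k] if k > 0 else probs
    return float(trimmed.mean())

def extremize(p, alpha=2.0):
    """Push an aggregated probability toward 0 or 1 by raising the odds
    to a power alpha > 1 (a common log-odds extremizing form)."""
    odds = (p / (1.0 - p)) ** alpha
    return odds / (1.0 + odds)

# Hypothetical expert judgments of a claim's replication probability.
judgments = [0.55, 0.60, 0.70, 0.65, 0.95, 0.20]
pooled = trimmed_opinion_pool(judgments, trim=1/6)  # drops 0.20 and 0.95
sharpened = extremize(pooled)  # further from 0.5 than the pooled mean
```

Trimming protects the aggregate from a few over- or underconfident outliers, while extremizing counteracts the tendency of simple averages to be too close to 0.5 when individual experts hold correlated but partial information.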
“…We recruited 99 experts from a pool of people who had previously participated in at least one repliCATS workshop or remote process for evaluating research claims, described in Fraser et al. (2023). Each expert participant was awarded a US$200 grant to assess 10 research claims for this study.…”
Section: Participants
confidence: 99%
“…Participants evaluated their assigned claims on an online platform developed for the repliCATS project (Pearson, 2020; Fraser et al., 2023; Fig. M1) implementing the ‘IDEA’ protocol (Hanea et al., 2017; Hemming et al., 2018a; Hemming et al., 2018b; Fig. 1).…”
Section: Procedures: Elicitation of Replication Outcomes
confidence: 99%
“…Details of these prompts are given in the electronic supplementary material. Data were analysed using a subset of analytic categories (codes) developed through qualitative content-analysis techniques during the repliCATS project [31,47]. These predefined codes were collected in the ‘Known-Outcome Codebook’, along with inclusion and exclusion criteria, to guide analysts in interpreting text instances with respect to relevant codes.…”
Section: Qualitative Analysis
confidence: 99%