2019
DOI: 10.1002/qre.2596

Improving expert forecasts in reliability: Application and evidence for structured elicitation protocols

Abstract: Quantitative expert judgements are used in reliability assessments to inform critically important decisions. Structured elicitation protocols have been advocated to improve expert judgements, yet their application in reliability is challenged by a lack of examples or evidence that they improve judgements. This paper aims to overcome these barriers. We present a case study where two world‐leading protocols, the IDEA protocol and the Classical Model, were combined and applied by the Australian Department of Defence…
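The Classical Model referred to in the abstract is built on performance weighting: experts are first scored on seed questions with known answers, and those scores determine each expert's weight in a linear pool over the target questions. The sketch below is a minimal, hypothetical illustration of that general idea only; it uses Brier scores in place of the Classical Model's calibration and information measures, and all forecasts, outcomes, and weights are made-up values.

import numpy as np

# Seed questions: events with known outcomes, used to score each expert.
seed_forecasts = np.array([
    [0.80, 0.20, 0.70],  # expert A's probabilities for three seed events
    [0.60, 0.40, 0.50],  # expert B's probabilities for the same events
])
seed_outcomes = np.array([1, 0, 1])

# Brier score per expert on the seed questions (lower is better).
brier = np.mean((seed_forecasts - seed_outcomes) ** 2, axis=1)

# Turn scores into normalised performance weights.
weights = (1.0 / brier) / np.sum(1.0 / brier)

# Target question: combine the experts' probabilities as a weighted linear pool.
target = np.array([0.75, 0.55])
print(float(np.dot(weights, target)))

Here expert A's better seed-question score gives it roughly three quarters of the weight, so the pooled probability sits closer to A's forecast.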


Cited by 23 publications (28 citation statements): 0 supporting, 28 mentioning, 0 contrasting.
References 59 publications (193 reference statements).
Citing publications: 2020–2024.
“…In the same geopolitical forecasting tournament, results from the Good Judgment Project team showed that trained teams of 'superforecasters' outperformed all other individuals, crowds, and other research teams (Mellers et al., 2014; Mellers et al., 2015a; Mellers et al., 2015b), and when previously-validated aggregation techniques were applied, self-reported forecasts ('beliefs') of geopolitical events were at least as accurate as prediction-market prices (Dana et al., 2019). Similar benefits of structured group elicitations and considered aggregation methods on improving judgement accuracy can be seen in environmental science (McBride et al., 2012; Wintle et al., 2013; Hemming et al., 2018b) and engineering (Hemming et al., 2020).…”
Section: Introduction (mentioning)
confidence: 67%
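One family of "previously-validated aggregation techniques" from that forecasting literature averages individual probabilities on the log-odds scale and then extremizes the result. The sketch below is a rough illustration under assumptions, not the specific method used in any cited study; the extremizing coefficient and the input probabilities are invented for the example.

import numpy as np

def extremized_mean(probs, a=2.0):
    # Average the forecasts on the log-odds scale, push the average away
    # from 0.5 by the extremizing coefficient a, and map back to [0, 1].
    log_odds = np.log(probs / (1.0 - probs))
    return float(1.0 / (1.0 + np.exp(-a * np.mean(log_odds))))

# Three experts' probabilities for the same event (illustrative values).
print(extremized_mean(np.array([0.60, 0.70, 0.65])))  # ~0.78

The extremizing step compensates for the fact that a plain average of partially independent forecasts is typically underconfident.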
“…However, especially with remote elicitation, there will always be a risk that experts discover the sources of the data when forming their judgments (as occurred in [Hemming et al.]).…”
Section: Discussion (mentioning)
confidence: 99%
“…However, some consider that the benefits of feedback outweigh the disadvantage of introducing dependence. Evidence on such trade-offs is scarce in the context of eliciting uncertainty from groups of experts (Hanea et al., 2016; Wilson and Farrow, 2018), but is informed by research discussed in Hemming et al. (2020a).…”
Section: Training Experts and Facilitators (mentioning)
confidence: 99%
“…A few IDEA-elicited data sets suggest that experts tend to anchor strongly on their initial judgements and only adjust if they hear good reasons to do so (e.g., Hemming et al., 2018a; Hanea et al., 2018). In addition, experiments also show that this discussion can improve individual judgements, and usually improves group judgements (e.g., Hemming et al., 2020a, 2018a; Hanea et al., 2018). A very recent study (Williams et al., 2020) suggests that the group judgement obtained through SHELF in an elicitation with three experts was better than the three individual initial estimates.…”
Section: Training Experts and Facilitators (mentioning)
confidence: 99%