2016
DOI: 10.1017/s1930297500004599
Developing expert political judgment: The impact of training and practice on judgmental accuracy in geopolitical forecasting tournaments

Abstract: The heuristics-and-biases research program highlights reasons for expecting people to be poor intuitive forecasters. This article tests the power of a cognitive-debiasing training module (“CHAMPS KNOW”) to improve probability judgments in a four-year series of geopolitical forecasting tournaments sponsored by the U.S. intelligence community. Although the training lasted less than one hour, it consistently improved accuracy (Brier scores) by 6 to 11% over the control condition. Cognitive ability and practice al…
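The abstract reports accuracy in terms of Brier scores. As a minimal sketch of how that metric works (assuming the standard quadratic scoring rule for a binary event; the tournaments' exact scoring variant is not reproduced here), the relative improvement over a control condition can be computed like this:

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between probability forecasts and 0/1 outcomes.

    Lower is better; a constant 50% forecast scores 0.25 under this convention.
    """
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Illustrative numbers only, not tournament data.
control = brier_score([0.6, 0.7, 0.2, 0.9], [1, 1, 0, 1])
trained = brier_score([0.7, 0.8, 0.1, 0.9], [1, 1, 0, 1])
print(control, trained)        # 0.075 0.0375
print(1 - trained / control)   # relative Brier improvement (here 0.5)
```

The 6 to 11% figure in the abstract refers to this kind of relative reduction in Brier score for trained forecasters versus controls.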

Cited by 67 publications (37 citation statements). References: 90 publications.
“…These large gains in accuracy are in line with prior research, which showed that reference class forecasts and base rates are one of the most effective tools for debiasing judgmental forecasts (Chang, Chen, Mellers & Tetlock, 2016). Experts in any field should refrain from focusing too much on the specifics of a situation ("this time is different") but also take the outside view (Lovallo & Kahneman, 2003).…”
Section: Discussion (supporting), confidence: 73%
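The reference-class point in the citation above can be illustrated with a toy "outside view" adjustment: shrinking a case-specific estimate toward the base rate of comparable past cases. The equal weighting below is purely illustrative and not a value taken from the cited studies.

```python
def blend_with_base_rate(inside_view, base_rate, weight_on_base_rate=0.5):
    """Pull a case-specific ("inside view") probability toward the
    reference-class base rate ("outside view").

    The 0.5 default weight is hypothetical, chosen only for illustration.
    """
    return weight_on_base_rate * base_rate + (1 - weight_on_base_rate) * inside_view

# E.g., a gut estimate of 0.8 against a historical base rate of 0.3
# for comparable cases:
print(blend_with_base_rate(0.8, 0.3))   # 0.55 -- pulled toward the outside view
```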
“…This conclusion suggests that efforts might be better focused on numeracy education. Such education could focus on how to update probabilistic beliefs more coherently (Mandel, 2015b) and use comparison classes (Chang et al., 2016), as well as on overcoming popular misconceptions about quantifying uncertainty, such as the view that assigning numbers to probabilities implies they are scientific estimates (Mandel & Irwin, 2020).…”
Section: Policy Implications (mentioning), confidence: 99%
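As a hedged illustration of what "updating probabilistic beliefs more coherently" can mean in practice, the sketch below applies the textbook Bayes rule to a single piece of evidence; the function and the numbers are illustrative and not drawn from Mandel (2015b).

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis after observing one piece of
    evidence, via Bayes' rule: P(H|E) = P(E|H)P(H) / P(E)."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Evidence three times as likely under the hypothesis as under its negation:
print(bayes_update(prior=0.2, likelihood_if_true=0.6, likelihood_if_false=0.2))
# -> about 0.43, up from the 0.2 prior
```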
“…That said, the process vs. outcome question is of fundamental social-organizational-political interest, so there is a need to encourage work on it. We made a good-faith effort to construct a form of process accountability, organized around a training system that did repeatedly work in this task environment (Chang et al., 2016). We urged people to use their judgment in deciding which guidelines to stress for particular problems, and we stressed that the quality of one's explanation for one's forecast would be the sole basis for judging performance, not the accuracy of the forecast.…”
Section: Defining Process, Outcome, and Hybrid Accountability (mentioning), confidence: 99%
“…We urged people to use their judgment in deciding which guidelines to stress for particular problems, and we stressed that the quality of one's explanation for one's forecast would be the sole basis for judging performance, not the accuracy of the forecast. In this sense, the process-accountability manipulation resembled the rather open-ended process manipulations used in the lab literature on accountability, which have been found to be moderately effective in reducing certain biases (Lerner & Tetlock, 1999; Chang et al., 2016). As we explain later, process-accountable forecasters had two process-specific opportunities to improve their scores in ways that were not incentivized for pure outcome forecasters: first, they could be more diligent at the actions supporting forecasting, and second, they could more fully utilize what they learned during initial forecasting training.…”
Section: Defining Process, Outcome, and Hybrid Accountability (mentioning), confidence: 99%