2018
DOI: 10.1017/s1930297500009219

Predicting elections: Experts, polls, and fundamentals

Abstract: This study analyzes the relative accuracy of experts, polls, and the so-called ‘fundamentals’ in predicting the popular vote in the four U.S. presidential elections from 2004 to 2016. Although the majority (62%) of 452 expert forecasts correctly predicted the directional error of polls, the typical expert’s vote share forecast was 7% (of the error) less accurate than a simple polling average from the same day. The results further suggest that experts follow the polls and do not sufficiently harness information…
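The abstract's accuracy comparison can be sketched as absolute errors of vote-share forecasts. The numbers below are hypothetical illustrations, not the study's data; the 7% figure from the abstract is a reported average, not something these inputs reproduce.

```python
# Sketch of the comparison in the abstract: absolute error of an expert's
# vote-share forecast vs. a same-day polling average. All numbers are
# hypothetical, not taken from the study.

def absolute_error(forecast: float, actual: float) -> float:
    """Absolute error of a vote-share forecast, in percentage points."""
    return abs(forecast - actual)

actual_vote_share = 51.1   # hypothetical election result
polling_average = 50.0     # hypothetical same-day polling average
expert_forecast = 49.8     # hypothetical expert forecast

poll_error = absolute_error(polling_average, actual_vote_share)
expert_error = absolute_error(expert_forecast, actual_vote_share)

# The study reports the typical expert was ~7% (of the error) less accurate
# than the polling average; here we just compute the relative loss for the
# hypothetical inputs above.
relative_loss = (expert_error - poll_error) / poll_error
print(f"poll error: {poll_error:.2f} pp, expert error: {expert_error:.2f} pp, "
      f"relative loss: {relative_loss:+.0%}")
```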

Cited by 16 publications (3 citation statements)
References 29 publications
“…The ACE-IDEA data set (Hanea et al, 2021) includes forecasts on 155 events, but on average each forecaster only replied to about 19 queries. Other data sets consist of fewer queries to be predicted or answered (Graefe, 2018; Hanea et al, 2021; Karvetski et al, 2013; Prelec et al, 2017), of which the highest number of queries is about 80 (Prelec et al, 2017). However, 80 answers per forecaster is still a small number for modeling the forecasters' behavior, particularly if we divide the data into training and test sets and model the answers to true/false queries separately.…”
Section: Figure 1 Graphical Models Of One Of the Models Proposed By
mentioning
confidence: 99%
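The data-scarcity point in the statement above (about 80 answers per forecaster, further divided for training/testing and by true/false answer) can be sketched with a toy split. The record layout and all numbers are illustrative assumptions, not from the cited papers.

```python
# Sketch: why ~80 answers per forecaster is little data once split.
# The record structure and counts here are illustrative assumptions.
import random

random.seed(0)
# Hypothetical answers of one forecaster to 80 true/false queries.
answers = [{"query_id": i, "answer": random.random() < 0.5} for i in range(80)]

random.shuffle(answers)
split = int(0.7 * len(answers))          # 70/30 train/test split
train, test = answers[:split], answers[split:]

# Modeling true and false answers separately divides the training data again.
train_true = [a for a in train if a["answer"]]
train_false = [a for a in train if not a["answer"]]
print(len(train), len(test), len(train_true), len(train_false))
```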
“…While statistical models are usually limited in applicability by requiring sufficiently large and complete data sets, human forecasts can overcome this limitation by taking advantage of human experience and intuition (Clemen, 1989; Clemen & Winkler, 1986; Genest & Zidek, 1986). The probability estimates can be given either as forecasts of events, e.g., rain probabilities in meteorological science or probabilities for the outcomes of geopolitical events such as elections (Graefe, 2018; Turner et al, 2014), other binary classifications, or the quantification of the experts' confidence in a prediction or the answer to a specific question (Karvetski et al, 2013; Prelec et al, 2017).…”
Section: Introduction
mentioning
confidence: 99%
“…Other research has previously explored the ways that automated approaches can benefit human forecasting in domains spanning judicial decisions (Jung et al, 2020), meteorology (Yu et al, 2011), health care (Goldstein et al, 2017), and defense (Scharre, 2016). Such methods range from simple averaging of human and model forecasts (Graefe, 2018), to expert-informed feature selection in regression models (Jung et al, 2020), to hybrid prediction markets (Nagar & Malone, 2012). The line between human and machine is often blurry: algorithms may incorporate human judgments as inputs (e.g., an interviewer’s ratings of job candidates), and human experts may decide the form and content of an algorithm.…”
mentioning
confidence: 99%
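The simplest of the hybrid methods named in the statement above, unweighted averaging of human and model forecasts, can be sketched in a few lines. The function name and the probability values are illustrative assumptions, not an implementation from the cited work.

```python
# Minimal sketch of combining human and model forecasts by simple
# (unweighted) averaging, the combination approach the text attributes
# to Graefe (2018). Names and numbers are illustrative assumptions.

def combine_forecasts(human: float, model: float) -> float:
    """Unweighted average of a human and a model probability forecast."""
    return (human + model) / 2.0

human_prob = 0.70   # hypothetical expert probability for an event
model_prob = 0.58   # hypothetical statistical-model probability

combined = combine_forecasts(human_prob, model_prob)
print(f"combined forecast: {combined:.2f}")  # midpoint of the two inputs
```

Unweighted averaging is a common baseline because it requires no training data to estimate weights, which matters given the small per-forecaster samples discussed above.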