2017
DOI: 10.1177/0022343316682065
Do the robot

Abstract: Increasingly, scholars interested in understanding conflict processes have turned to evaluating out-of-sample forecasts to judge and compare the usefulness of their models. Research in this vein has made significant progress in identifying and avoiding the problem of overfitting sample data. Yet there has been less research providing strategies and tools to practically improve the out-of-sample performance of existing models and connect forecasting improvement to the goal of theory development in conflict studies…

Cited by 62 publications (13 citation statements). References 31 publications.
“…The ground truth can be retrieved via interviews with regional experts, media posts, and impacts of climatic events observed in quantitative data, to name a few examples (Bakkensen et al., 2017; Busby et al., 2018; Mach & Kraan, 2021; Visser et al., 2020). In this sense, quantitative predictive tools can draw on out-of-sample heuristics to evaluate model performance on a test set not used for model construction (Busby, 2018; Colaresi & Mahmood, 2017; Hegre et al., 2021). This could be advanced by feeding the model new observations over time and validating against these new data points.…”
Section: Review Approach
confidence: 99%
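To make the heuristic in the passage above concrete, here is a minimal sketch of temporal out-of-sample validation, assuming scikit-learn. The covariates, cutoff year, and synthetic country-year data are invented for illustration and are not taken from any of the cited tools: fit on earlier years, score on a later hold-out that played no role in model construction, then re-validate as each new year of observations arrives.

```python
# Hedged sketch: temporal hold-out plus rolling re-validation.
# All data are synthetic; features, years, and outcome are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
years = rng.integers(2000, 2021, size=n)                # hypothetical country-years
X = rng.normal(size=(n, 5))                             # placeholder covariates
y = (X[:, 0] + rng.normal(size=n) > 1).astype(int)      # placeholder conflict onset

# Temporal split: the test set plays no role in model construction.
train, test = years < 2015, years >= 2015
model = LogisticRegression().fit(X[train], y[train])
print("hold-out AUC:", roc_auc_score(y[test], model.predict_proba(X[test])[:, 1]))

# Rolling validation: score each newly arriving year, then absorb it.
for cutoff in range(2015, 2020):
    fit, new = years <= cutoff, years == cutoff + 1
    m = LogisticRegression().fit(X[fit], y[fit])
    auc = roc_auc_score(y[new], m.predict_proba(X[new])[:, 1])
    print(cutoff + 1, round(auc, 3))
```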
“…All predictive tools conduct some out-of-sample predictions to validate the model performance (Ward & Beger, 2017). In this regard, ViEWS explicitly follows the guidelines by Colaresi and Mahmood (2017). WPS further benchmarks its model against others, including ViEWS (Kuzma et al., 2020).…”
Section: Review Findings
confidence: 99%
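A benchmarking exercise of the kind this passage attributes to WPS can be sketched as scoring competing models on the same hold-out. The sketch below is a generic illustration with synthetic data and stand-in models (logistic regression vs. random forest), not ViEWS or WPS code; it reports AUC for discrimination and the Brier score for calibration.

```python
# Hedged sketch: benchmark two stand-in models on one shared hold-out.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 8))
# Nonlinear interaction in the outcome, so the two models can differ.
y = (np.tanh(X[:, 0] * X[:, 1]) + rng.normal(scale=0.5, size=3000) > 0.5).astype(int)
X_tr, X_te, y_tr, y_te = X[:2400], X[2400:], y[:2400], y[2400:]

for name, model in [("logit", LogisticRegression(max_iter=1000)),
                    ("forest", RandomForestClassifier(n_estimators=200, random_state=1))]:
    p = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    # Report discrimination (AUC) and calibration (Brier score) side by side.
    print(f"{name}: AUC={roc_auc_score(y_te, p):.3f}  Brier={brier_score_loss(y_te, p):.3f}")
```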
“…Recently, various studies incorporating machine learning techniques, for example [5][6][7], have indicated that certain machine learning methods can deliver accurate forecasts for social conflicts.…”
Section: Introduction
confidence: 99%
“…Ward, Greenhill, and Bakke [45] demonstrate that out-of-sample validation of predictive models is a useful heuristic tool for evaluating causal claims and policy guidance. Colaresi and Mahmood [46] reason that machine learning predictions of conflict can inform theory through the analysis of patterns in the data, mainly discrepancies between observed and modelled outcomes, allowing for nonlinearity and complex interactions between variables.…”
Section: Introduction
confidence: 99%
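The discrepancy-driven theory development that this passage attributes to Colaresi and Mahmood [46] can be illustrated by ranking out-of-sample cases by the gap between observed outcomes and modelled probabilities. Everything below (column names, data, and model choice) is a hypothetical sketch, not the authors' pipeline.

```python
# Hedged sketch: surface the largest observed-vs-modelled discrepancies
# as candidates for theory revision. Data and columns are invented.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
features = ["gdp_growth", "repression", "neighbors_at_war"]  # hypothetical covariates
df = pd.DataFrame(rng.normal(size=(1500, 3)), columns=features)
df["conflict"] = ((df["repression"] * df["neighbors_at_war"]).clip(lower=0)
                  + rng.normal(scale=0.4, size=1500) > 0.6).astype(int)

train, test = df.iloc[:1200], df.iloc[1200:].copy()
model = GradientBoostingClassifier().fit(train[features], train["conflict"])
test["p_hat"] = model.predict_proba(test[features])[:, 1]
test["discrepancy"] = (test["conflict"] - test["p_hat"]).abs()

# The largest discrepancies (surprising onsets and false alarms) are the
# cases worth qualitative inspection for omitted variables or interactions.
print(test.sort_values("discrepancy", ascending=False).head(10))
```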