2017
DOI: 10.2139/ssrn.2943297
Agent-Based Model Calibration Using Machine Learning Surrogates

Abstract: Taking agent-based models (ABM) closer to the data is an open challenge. This paper explicitly tackles parameter space exploration and calibration of ABMs combining supervised machine-learning and intelligent sampling to build a surrogate meta-model. The proposed approach provides a fast and accurate approximation of model behaviour, dramatically reducing computation time. In that, our machine-learning surrogate facilitates large scale explorations of the parameter-space, while providing a powerful filter to g…
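The surrogate-as-filter idea in the abstract can be illustrated with a minimal sketch. Here a cheap stand-in function (`toy_abm`, a hypothetical placeholder, not the paper's model) plays the role of an expensive ABM run; a classifier is trained on a small labelled sample of parameter vectors and then screens a much larger candidate pool at negligible cost. The paper's actual pipeline differs (it uses XGBoost with iterative subsampling); this only shows the general mechanism.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def toy_abm(theta):
    """Stand-in for an expensive ABM run: returns a summary statistic."""
    return np.sin(3 * theta[0]) + 0.5 * theta[1] ** 2

# Label a small training sample: 1 if the simulated statistic falls in
# an empirically plausible band (an arbitrary choice here), 0 otherwise.
X_train = rng.uniform(-1, 1, size=(200, 2))
y_train = np.array([abs(toy_abm(t) - 0.3) < 0.25 for t in X_train], dtype=int)

surrogate = GradientBoostingClassifier().fit(X_train, y_train)

# Filter a much larger pool without running the "ABM" again.
pool = rng.uniform(-1, 1, size=(100_000, 2))
candidates = pool[surrogate.predict_proba(pool)[:, 1] > 0.5]
print(f"{len(candidates)} of {len(pool)} points kept for full simulation")
```

Only the retained candidates would then be passed to full (expensive) simulation runs, which is where the computational saving comes from.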

Cited by 21 publications (17 citation statements); references 102 publications (73 reference statements).
“…Calibrating more outputs could be approached by including them as additional objectives, which defines a new scenario where the use of many-objective EMO algorithms will be required. Besides, we believe that surrogate fitness functions would be useful for future studies due to the high computational costs of simulating multiple times for every evaluation of a single model configuration [68].…”
Section: Practical Implications and Future Directions
confidence: 99%
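The surrogate-fitness idea mentioned in this statement can be sketched as a pre-screening loop: a regressor fitted on already-evaluated configurations ranks many cheap candidates, and only the top few receive true (expensive) fitness evaluations. The objective below (`expensive_fitness`) is a hypothetical stand-in, and the loop is a generic surrogate-assisted search rather than any specific EMO algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

def expensive_fitness(theta):
    """Stand-in for a fitness needing many ABM replications (lower = better)."""
    return float(np.sum((theta - 0.5) ** 2))

# Archive of already-evaluated points trains the surrogate.
X = rng.uniform(0, 1, size=(60, 3))
y = np.array([expensive_fitness(t) for t in X])

for gen in range(5):
    surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    # Generate many cheap candidates, but truly evaluate only the
    # five the surrogate ranks best.
    cand = rng.uniform(0, 1, size=(500, 3))
    best = cand[np.argsort(surrogate.predict(cand))[:5]]
    X = np.vstack([X, best])
    y = np.concatenate([y, [expensive_fitness(t) for t in best]])

print("best fitness found:", y.min())
```

Each generation costs only five real evaluations instead of five hundred, which is the trade the cited statement is pointing at.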
“…The main drawback of such a ‘one‐shot’ sampling approach is twofold: (i) there is no well‐established mechanism which enables the analyst to determine the sample size required to generate a metamodel with the desired accuracy level, and (ii) generating a sample with space‐filling property may be redundant for large regions of input space exhibiting homogeneous behaviour in terms of simulation model output. To overcome these weaknesses of ‘one‐shot’ designs, sequential sampling techniques (also known as adaptive sampling and active learning) are extensively used in the literature (Eason & Cremaschi, 2014; Edali & Yücel, 2019; Lamperti, Roventini, & Sani, 2018; Liu, Xu, & Wang, 2015; Xiong, Xiong, Chen, & Yang, 2009). Instead of generating a large ‘one‐shot’ sample for metamodel training, adaptive sequential sampling strategies expand the training set size in an iterative manner by utilizing the knowledge obtained in the earlier sampling and metamodel training iterations.…”
Section: Proposed Approach
confidence: 99%
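The adaptive sequential sampling described above can be sketched as follows: start from a small one-shot design, then repeatedly add the point where the metamodel is least certain. As an uncertainty proxy, this sketch uses the spread of per-tree predictions in a random forest (one common choice; kriging variance is another); `simulator` is a hypothetical stand-in for the simulation model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

def simulator(x):
    """Stand-in for the simulation model's output at input x."""
    return np.sin(5 * x[0]) * x[1]

# Small initial 'one-shot' design...
X = rng.uniform(0, 1, size=(20, 2))
y = np.array([simulator(x) for x in X])

# ...then grow it where the metamodel is least certain, using the
# disagreement among the forest's trees as the uncertainty signal.
for step in range(10):
    forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    pool = rng.uniform(0, 1, size=(2000, 2))
    per_tree = np.stack([t.predict(pool) for t in forest.estimators_])
    x_next = pool[per_tree.std(axis=0).argmax()]
    X = np.vstack([X, x_next])
    y = np.append(y, simulator(x_next))

print("training set grew to", len(X), "points")
```

Because each new point targets the currently worst-understood region, the design avoids wasting simulation budget on regions where the output is already well approximated.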
“…The function inferred from the simulation input–output data can be considered as an estimated representation of the input–output relationship of the original SDM or IBM and is also called a metamodel, a surrogate model, a proxy, an emulator or a response surface (Kleijnen, 2009; Kleijnen, Sanchez, Lucas, & Cioppa, 2005; Kleijnen & Sargent, 2000). In an attempt towards proposing a systematic and, perhaps, automated model analysis approach for IBMs, several former studies utilize metamodels of different sorts, which guide the process of behaviour space exploration and/or of relationship identification between model inputs and outputs (Chérel, Cottineau, & Reuillon, 2015; Edali & Yücel, 2018; Edali & Yücel, 2019; Edmonds, Little, Lessard‐Phillips, & Fieldhouse, 2014; Lamperti, Roventini, & Sani, 2018).…”
Section: Introduction
confidence: 99%
“…As discussed in , the curse of dimensionality makes the practical application of the tools discussed so far nearly impossible for medium and large scale ABMs. To address this problem, Lamperti et al (2016b) have proposed to use machine learning surrogates to conveniently filter the parameter space of simulation models, dramatically reducing the computational effort needed to explore the behavior of the model when many parameters are at stake.…”
Section: Model Selection and Empirical Validation
confidence: 99%
“…Recent developments try to mitigate over-parameterization issues by resorting to phase-diagrams (Gualdi et al, 2015), Kriging meta-modeling (Salle and Yıldızoglu, 2014; Dosi et al, 2016c; Bargigli et al, 2016), and machine-learning surrogates (Lamperti et al, 2016b). We shall briefly come back to these issues in the concluding remarks.…”
Section: Model Selection and Empirical Validation
confidence: 99%