The proliferation of agent-based models (ABMs) in recent decades has motivated model practitioners to improve the transparency and replicability of, and trust in, results derived from ABMs. The complexity of ABMs has risen in stride with advances in computing power and resources, resulting in larger models with complex interactions and learning, whose outputs are often high-dimensional and require sophisticated analytical approaches. Similarly, the increasing use of data and dynamics in ABMs has further enhanced the complexity of their outputs. In this article, we offer an overview of the state-of-the-art approaches to analysing and reporting ABM outputs, highlighting challenges and outstanding issues. In particular, we examine issues surrounding variance stability (in connection with determining the appropriate number of runs and hypothesis testing), sensitivity analysis, spatio-temporal analysis, visualization, and effective communication of all of these to non-technical audiences, such as various stakeholders.
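The variance-stability question mentioned above (how many stochastic runs are enough?) can be illustrated with a minimal sketch. The toy model, the stability criterion, and the tolerance value here are all hypothetical, chosen only to show the general pattern of adding runs until a summary statistic settles; real ABM studies would substitute their own output metric and a problem-specific threshold.

```python
import random
import statistics

def toy_abm_run(seed):
    """Stand-in for one stochastic ABM run; returns a scalar output
    metric (here, the mean of 100 noisy agent states)."""
    rng = random.Random(seed)
    agents = [rng.gauss(10.0, 3.0) for _ in range(100)]
    return statistics.mean(agents)

def runs_until_stable(tol=0.01, min_runs=10, max_runs=2000):
    """Add runs until the coefficient of variation (CV) of the collected
    outputs changes by less than `tol` from one run to the next -- one
    common variance-stability heuristic; thresholds are problem-specific."""
    outputs = []
    prev_cv = None
    for n in range(1, max_runs + 1):
        outputs.append(toy_abm_run(seed=n))
        if n >= min_runs:
            cv = statistics.stdev(outputs) / statistics.mean(outputs)
            if prev_cv is not None and abs(cv - prev_cv) < tol:
                return n, cv
            prev_cv = cv
    return max_runs, prev_cv

n_runs, final_cv = runs_until_stable()
print(n_runs, round(final_cv, 4))
```

In practice the CV trajectory would be inspected over a window of runs rather than consecutive pairs, but the structure, accumulating runs and monitoring a dispersion statistic, is the same.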
Many still view simulation models as a black box. This paper argues that perceptions could change if the systematic design of experiments (DOE) for simulation research were fully realized. DOE can increase (1) the transparency of simulation model behavior and (2) the effectiveness of reporting simulation results. Based on DOE principles, we develop a systematic procedure to guide the analysis of simulation models, as well as concrete templates for sharing the results. A simulation model investigating the performance of learning algorithms in an economic mechanism-design context illustrates our suggestions. Overall, the proposed systematic procedure for applying DOE principles complements current initiatives for a more standardized simulation research process.
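A core DOE building block is the full factorial design: every combination of factor levels is run, with replications under distinct seeds. The factor names and levels below are purely illustrative (not taken from the paper above); the sketch only shows how such a design matrix can be enumerated programmatically.

```python
import itertools

# Hypothetical experimental factors for a simulation study;
# names and levels are illustrative only.
factors = {
    "learning_rate": [0.01, 0.1],
    "n_agents": [50, 200],
    "matching_rule": ["random", "greedy"],
}

def full_factorial(factors, replications=3):
    """Enumerate every factor-level combination (a 2^k-style full
    factorial design), each repeated with a distinct replication seed."""
    names = list(factors)
    design = []
    for combo in itertools.product(*(factors[n] for n in names)):
        for rep in range(replications):
            design.append({**dict(zip(names, combo)), "seed": rep})
    return design

design = full_factorial(factors)
print(len(design))  # 2 x 2 x 2 levels, 3 replications each = 24 runs
```

Each row of `design` would then be passed to the simulation, making the mapping from experimental plan to reported results explicit and reproducible.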
Agent-based models (ABMs) are increasingly recognized as valuable tools in modelling human-environmental systems, but challenges and criticisms remain. One pressing challenge, in the era of "Big Data" and given the flexibility of representation afforded by ABMs, is identifying the appropriate level of complicatedness in model structure for representing and investigating complex real-world systems. In this paper, we differentiate the concepts of complexity (model behaviour) and complicatedness (model structure), and illustrate the non-linear relationship between them. We then systematically evaluate the trade-offs between simple (often theoretical) models and complicated (often empirically grounded) models. We propose using pattern-oriented modelling, stepwise approaches, and modular design to guide modellers in reaching an appropriate level of model complicatedness. While ABMs should be kept as simple as possible but as complicated as necessary to address the predefined research questions, we also warn modellers of the pitfalls and risks of building "mid-level" models that mix stylized and empirical components.