Mathematical models of natural systems are abstractions of much more complicated processes. Developing informative and realistic models of such systems typically involves suitable statistical inference methods, domain expertise, and a modicum of luck. Except for cases where physical principles provide sufficient guidance, it will generally be possible to come up with a large number of potential models that are compatible with a given natural system and any finite amount of data generated from experiments on that system. Here we develop a computational framework to systematically evaluate potentially vast sets of candidate differential equation models in light of experimental and prior knowledge about biological systems. This topological sensitivity analysis enables us to evaluate quantitatively the dependence of model inferences and predictions on the assumed model structures. Failure to consider the impact of structural uncertainty introduces biases into the analysis and potentially gives rise to misleading conclusions.

robustness analysis | biological networks | network inference | dynamical systems

Using simple models to study complex systems has become standard practice in different fields, including systems biology, ecology, and economics. Although we know and accept that such models do not fully capture the complexity of the underlying systems, they can nevertheless provide meaningful predictions and insights (1). A successful model is one that captures the key features of the system while omitting extraneous details that hinder interpretation and understanding. Constructing such a model is usually a nontrivial task involving stages of refinement and improvement.

When dealing with models that are (necessarily and by design) gross oversimplifications of the reality they represent, it is important that we are aware of their limitations and do not seek to overinterpret them. This is particularly true when modeling complex systems for which there are only limited or incomplete observations. In such cases, we expect there to be numerous models that would be supported by the observed data, many (perhaps most) of which we may not yet have identified. The literature is awash with papers in which a single model is proposed and fitted to a dataset, and conclusions are drawn without any consideration of (i) possible alternative models that might describe the observed behavior and known facts equally well (or even better); or (ii) whether the conclusions drawn from different models (still consistent with current observations) would agree with one another.

We propose an approach to assess the impact of uncertainty in model structure on our conclusions. Our approach is distinct from, and complementary to, existing methods designed to address structural uncertainty, including model selection, model averaging, and ensemble modeling (2-9). Analogous to parametric sensitivity analysis (PSA), which assesses the sensitivity of a model's behavior to changes in parameter values, we consider the sensitivity of a model's output to changes in model structure.
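To make the analogy concrete, the following minimal Python sketch contrasts two candidate model structures on a toy system. It is illustrative only, not the framework developed here: the two candidate topologies (`model_a`, `model_b`), their parameter names, and the synthetic data are all hypothetical assumptions, chosen to show how models that fit the same observations comparably well can nevertheless diverge in their predictions.

```python
# Toy illustration (hypothetical models and data): two candidate ODE
# topologies are fitted to the same synthetic time course, then their
# extrapolated predictions are compared.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Candidate topologies for one observed species x:
# model A: constant production with linear decay;
# model B: saturating (Michaelis-Menten-like) production with linear decay.
def model_a(t, x, k_prod, k_deg):
    return k_prod - k_deg * x

def model_b(t, x, v_max, k_m, k_deg):
    return v_max * x / (k_m + x) - k_deg * x

def simulate(rhs, params, t_eval, x0=0.1):
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [x0],
                    t_eval=t_eval, args=tuple(params), rtol=1e-8)
    return sol.y[0]

# Synthetic "experimental" data generated from model A plus noise.
t_obs = np.linspace(0.0, 5.0, 12)
data = simulate(model_a, (1.0, 0.8), t_obs) + rng.normal(0, 0.02, t_obs.size)

def fit(rhs, p0):
    def sse(p):
        if np.any(p <= 0):          # crude guard: keep rate constants positive
            return 1e6
        return np.sum((simulate(rhs, p, t_obs) - data) ** 2)
    return minimize(sse, p0, method="Nelder-Mead").x

p_a = fit(model_a, np.array([0.5, 0.5]))
p_b = fit(model_b, np.array([1.0, 1.0, 0.5]))

# Both topologies may describe the observed window comparably well...
print("SSE A:", np.sum((simulate(model_a, p_a, t_obs) - data) ** 2))
print("SSE B:", np.sum((simulate(model_b, p_b, t_obs) - data) ** 2))

# ...yet disagree when extrapolated beyond it; that disagreement reflects
# the sensitivity of predictions to the assumed model structure.
t_new = np.linspace(0.0, 15.0, 50)
gap = np.abs(simulate(model_a, p_a, t_new) - simulate(model_b, p_b, t_new))
print("max prediction gap:", gap.max())
```

In this toy setting, PSA would perturb the fitted parameters of a single model, whereas the comparison above perturbs the model form itself; the gap between the fitted models' extrapolations is the kind of quantity that a topological sensitivity analysis seeks to characterize systematically across a large candidate set.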