The 'ODD' (Overview, Design concepts, and Details) protocol was published in 2006 to standardize the published descriptions of individual-based and agent-based models (ABMs). The primary objectives of ODD are to make model descriptions more understandable and complete, thereby making ABMs less subject to criticism for being irreproducible. We have systematically evaluated existing uses of the ODD protocol and identified, as expected, parts of ODD needing improvement and clarification. Accordingly, we revise the definition of ODD to clarify aspects of the original version and thereby facilitate future standardization of ABM descriptions. We discuss frequently raised critiques of ODD but also two emerging, and unanticipated, benefits: ODD improves the rigorous formulation of models and helps make the theoretical foundations of large models more visible. Although the protocol was designed for ABMs, it can help with documenting any large, complex model, alleviating some general objections against such models.
Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.

What makes James Bond an agent? He has a clear goal, he is autonomous in his decisions about achieving the goal, and he adapts these decisions to his rapidly changing situation. We are surrounded by such autonomous, adaptive agents: cells of the immune system, plants, citizens, stock market investors, businesses, etc. The agent-based complex systems (1) (ACSs) around us are made up of myriad interacting agents. One of the most important challenges confronting modern science is to understand and predict such systems. Bottom-up simulation modeling is one tool for doing so: We compile relevant information about entities at a lower level of the system (in "agent-based models," these are individual agents), formulate theories about their behavior, implement these theories in a computer simulation, and observe the emergence of system-level properties related to particular questions (2, 3). Bottom-up models have been developed for many types of ACSs (4), but the identification of general principles underlying the organization of ACSs has been hampered by the lack of an explicit strategy for coping with the two main challenges of bottom-up modeling: complexity and uncertainty (5, 6).
Consequently, model structure often is chosen ad hoc, and the focus is often on how to represent agents without sufficient emphasis on analyzing and validating the applicability of models to real problems (5, 7). A strategy called pattern-oriented modeling (POM) attempts to make bottom-up modeling more rigorous and comprehensive (6, 8-10). In POM, we explicitly follow the basic research program of science: the explanation of observed patterns (11). Patterns are defining characteristics of a system and often, therefore, indicators of essential underlying processes and structures. Patterns contain information on the internal organization of a system, but in a "coded" form. The purpose of POM is to "decode" this information (10). The motivation for POM is that, for complex systems, a single pattern observed at a specific scale and hierarchical level is not sufficient to reduce uncertainty in model structure and parameters. This has long been known in science. For example, Chargaff's rule of DNA base pairing was not sufficient to decode the structure of DNA until combined with patterns from x-ray...
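The bottom-up modeling loop described above (compile information about agents, formulate behavioral rules, simulate, and observe emergent system-level properties) can be sketched in miniature. The agents, rules, and parameters below are illustrative inventions for demonstration only, not taken from any model discussed in the text:

```python
import random

def run_abm(n_agents=50, steps=100, size=20, seed=42):
    """Minimal bottom-up simulation: each agent follows a simple local
    rule (step toward the richest neighboring cell), and a system-level
    pattern (clustering around the resource peak) emerges from it."""
    rng = random.Random(seed)
    # Resource landscape: a single peak at the grid center, with value
    # decreasing by Manhattan distance from the center.
    resource = {(x, y): -(abs(x - size // 2) + abs(y - size // 2))
                for x in range(size) for y in range(size)}
    agents = [(rng.randrange(size), rng.randrange(size)) for _ in range(n_agents)]
    for _ in range(steps):
        new_agents = []
        for (x, y) in agents:
            # Candidate moves: stay put, or step to a 4-neighbor cell.
            moves = [(x, y)] + [((x + dx) % size, (y + dy) % size)
                                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            # Agent-level rule: greedily move to the highest local resource.
            new_agents.append(max(moves, key=lambda p: resource[p]))
        agents = new_agents
    # Emergent, system-level observable: mean distance to the peak.
    cx = cy = size // 2
    return sum(abs(x - cx) + abs(y - cy) for x, y in agents) / len(agents)
```

Running `run_abm()` shows the system-level outcome (agents aggregated at the peak, mean distance 0) arising purely from the agent-level rule; in a POM study, such emergent patterns would be compared against observed ones to discriminate between candidate rules.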
A nonlinear function general enough to include the effects of feeding saturation and intraspecific consumer interference is used to represent the transfer of material or energy from one trophic level to another. The function agrees with some recent experimental data on feeding rates. A model using this feeding rate function is subjected to equilibrium and stability analyses to ascertain its mathematical implications. The analyses lead to several observations; for example, increases in maximum feeding rate may, under certain circumstances, result in decreases in consumer population, and mutual interference between consumers is a major stabilizing factor in a nonlinear system. The analyses also suggest that realistic classes of consumer-resource models exist which do not obey Kolmogorov's Criteria but are nevertheless globally stable.
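A transfer function with both feeding saturation and intraspecific consumer interference is commonly written in the form aR / (1 + bR + cC), where R is resource density and C is consumer density (the Beddington-DeAngelis functional response). The sketch below uses that classic form with made-up parameter values; the exact formulation and parameterization in the paper may differ:

```python
def feeding_rate(R, C, a=1.0, b=0.1, c=0.5):
    """Per-consumer feeding rate of the form aR / (1 + bR + cC).

    Illustrative parameters (not from the paper):
      a - attack/encounter coefficient
      b - saturation coefficient: rate plateaus near a/b at high R
      c - interference coefficient: rate falls as consumer density C rises
    """
    return a * R / (1.0 + b * R + c * C)
```

For example, `feeding_rate(10, 1)` exceeds `feeding_rate(10, 5)`, showing interference depressing per-consumer intake, while very large R drives the rate toward the ceiling a/b regardless of further resource increases (feeding saturation).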
Landscape ecology deals with the patterning of ecosystems in space. Methods are needed to quantify aspects of spatial pattern that can be correlated with ecological processes. The present paper develops three indices of pattern derived from information theory and fractal geometry. Using digitized maps, the indices are calculated for 94 quadrangles covering most of the eastern United States. The indices are shown to be reasonably independent of each other and to capture major features of landscape pattern. One of the indices, the fractal dimension, is shown to be correlated with the degree of human manipulation of the landscape.
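Information-theoretic pattern indices of this kind can be computed directly from a digitized land-cover map. The sketch below implements two standard examples, Shannon diversity H and dominance D = ln(m) - H over m cover types, which are representative of the information-theory indices described; the paper's exact definitions (and its contagion and fractal-dimension indices) may differ:

```python
import math
from collections import Counter

def diversity_and_dominance(cover_map):
    """Compute Shannon diversity H = -sum(p_i * ln p_i) over the
    proportions p_i of each cover type in a gridded map, and
    dominance D = ln(m) - H for the m types present. High D means
    one cover type dominates the landscape; D = 0 means all types
    are equally represented."""
    cells = [cell for row in cover_map for cell in row]
    counts = Counter(cells)
    n = len(cells)
    probs = [k / n for k in counts.values()]
    H = -sum(p * math.log(p) for p in probs)
    D = math.log(len(counts)) - H
    return H, D
```

A perfectly even two-type map gives H = ln 2 and D = 0, while a map dominated by one type gives lower H and positive D, which is the sense in which such indices summarize landscape pattern for correlation with ecological processes.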
Saliva is a body fluid with important functions in oral and general health. A consortium of three research groups catalogued the proteins in human saliva collected as the ductal secretions: 1166 identifications-914 in parotid and 917 in submandibular/sublingual saliva-were made. The results showed that a high proportion of proteins that are found in plasma and/or tears are also present in saliva along with unique components. The proteins identified are involved in numerous molecular processes ranging from structural functions to enzymatic/catalytic activities. As expected, the majority mapped to the extracellular and secretory compartments. An immunoblot approach was used to validate the presence in saliva of a subset of the proteins identified by mass spectrometric approaches. These experiments focused on novel constituents and proteins for which the peptide evidence was relatively weak. Ultimately, information derived from the work reported here and related published studies can be used to translate blood-based clinical laboratory tests into a format that utilizes saliva. Additionally, a catalogue of the salivary proteome of healthy individuals allows future analyses of salivary samples from individuals with oral and systemic diseases, with the goal of identifying biomarkers with diagnostic and/or prognostic value for these conditions; another possibility is the discovery of therapeutic targets.
As the lipidomics field continues to advance, self-evaluation within the community is critical. Here, we performed an interlaboratory comparison exercise for lipidomics using Standard Reference Material (SRM) 1950-Metabolites in Frozen Human Plasma, a commercially available reference material. The interlaboratory study comprised 31 diverse laboratories, with each laboratory using a different lipidomics workflow. A total of 1,527 unique lipids were measured across all laboratories, and consensus location estimates and associated uncertainties were determined for 339 of these lipids measured at the sum composition level by five or more participating laboratories. These evaluated lipids detected in SRM 1950 serve as community-wide benchmarks for intra- and interlaboratory quality control and method validation. These analyses were performed using nonstandardized laboratory-independent workflows. The consensus locations were also compared with a previous examination of SRM 1950 by the LIPID MAPS consortium. While the central theme of the interlaboratory study was to provide values to help harmonize measurements of lipids, lipid mediators, and their precursors across the community, it was also initiated to stimulate a discussion regarding areas in need of improvement.
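The idea of a "consensus location estimate with associated uncertainty" across laboratories can be illustrated with a simple robust estimator. The sketch below uses the median and a MAD-based uncertainty with the five-laboratory threshold mentioned in the text; the study's actual statistical procedure is more sophisticated and may differ:

```python
import statistics

def consensus_location(lab_values, min_labs=5):
    """Toy consensus estimate for one lipid measured by several labs:
    the median of the per-lab values, with an uncertainty from the
    median absolute deviation (MAD) scaled to approximate a standard
    deviation. Returns None when fewer than `min_labs` laboratories
    report the lipid, mirroring the five-lab threshold in the text."""
    values = list(lab_values)
    if len(values) < min_labs:
        return None
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return med, 1.4826 * mad  # 1.4826 * MAD ~ std. dev. for normal data
```

The robustness matters here because each laboratory used a different workflow: a single outlying lab value shifts the median and MAD far less than it would shift a mean and standard deviation.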