Successfully building a model requires a combination of expertise in the problem domain and in the practice of modeling and simulation (M&S). Model verification, validation, and testing (VV&T) are essential to the consistent production of models that are useful and correct. There are significant communities of domain experts that build and use models without employing dedicated modeling specialists. Current modeling tools leave these communities relatively underserved, particularly in the area of model testing and evaluation. This paper describes several techniques that modeling tools can use to support the domain expert in performing VV&T, and discusses the advantages and disadvantages of this approach to modeling.
Systems Biology is aimed at analyzing the behavior and interrelationships of biological systems and is characterized by combining experimentation, theory, and computation. Dedicated to exploring current challenges, the panel brings together people from a variety of disciplines whose perspectives illuminate diverse facets of Systems Biology and the challenges for modeling and simulation methods.
Today's macromolecular regulatory network models are small compared to the amount of information known about a particular cellular pathway, in part because current modeling languages and tools are unable to handle significantly larger models. Thus, most pathway modeling work today focuses on building small models of individual pathways, since these are easy to construct and manage. The hope is someday to put these pieces together to create a more complete picture of the underlying molecular machinery. While efforts to build large models would benefit from reusing existing components, there is currently little tool or representational support for combining or composing models. We have identified four distinct modeling processes related to model composition: fusion, composition, aggregation, and flattening. We present concrete proposals for implementing all four processes in the context of the Systems Biology Markup Language (SBML).

REGULATORY NETWORK MODELING

Macromolecular regulatory network models attempt to deduce the physiological properties of a cell from wiring diagrams of its control systems. These networks of interacting proteins are intrinsically dynamic. They describe the molecular mechanisms by which a cell changes in space and time to respond to stimuli, grow and reproduce, differentiate, and do all the other remarkable tricks that are necessary to stay alive and propagate the species.

A simple example of a regulatory network is the set of reactions controlling the activity of MPF (mitosis promoting factor) in Xenopus oocyte extracts (Marlovits et al. 1998), which we refer to herein as the frog egg model (see Figure 1). Such networks are often represented as graphs whose vertices represent substrates and products (collectively referred to as species), and whose labeled directed edges represent the reactions.
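To make the flattening process concrete, the following is a minimal sketch of the general idea: a hierarchically composed model is collapsed into a single namespace by qualifying each submodel's species names with the submodel's name. The dict-based data structure and all names here are illustrative assumptions, not the SBML representation proposed in the paper.

```python
# Illustrative sketch of "flattening" a hierarchically composed model.
# A composed model holds its own species plus named submodels; flattening
# merges everything into one flat namespace by dotted-prefixing species
# names. The dict structure and names are hypothetical, not SBML.

def flatten(model, prefix=""):
    """Return a flat {qualified_species_name: initial_concentration} map."""
    flat = {}
    for name, value in model.get("species", {}).items():
        flat[prefix + name] = value
    for sub_name, sub_model in model.get("submodels", {}).items():
        flat.update(flatten(sub_model, prefix + sub_name + "."))
    return flat

composed = {
    "species": {"MPF": 0.1},
    "submodels": {
        "frog_egg": {"species": {"Cdc25": 1.0, "Wee1": 1.0}},
    },
}

print(flatten(composed))
# → {'MPF': 0.1, 'frog_egg.Cdc25': 1.0, 'frog_egg.Wee1': 1.0}
```

Prefixing avoids name collisions between submodels that use the same species identifiers, which is one of the basic obstacles any composition scheme must address.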
Chemical reactions cause the concentrations of the chemical species (C_i) to change in time according to the rate equations

    dC_i/dt = Σ_{j=1..R} b_ij · v_j,

where R is the number of reactions, v_j is the velocity of the jth reaction in the network, and b_ij is the stoichiometric coefficient of species i in reaction j (b_ij < 0 for substrates, b_ij > 0 for products, b_ij = 0 if species i does not take part in reaction j). The full set of rate equations is a mathematical representation of the temporal behavior of the regulatory network. Modelers are faced with many computational problems: accurately and efficiently solving the equations when velocities are characterized by widely varying time constants, finding steady-state solutions, estimating rate constants by fitting numerical solutions to experimental data, and identifying bifurcation points in the multi-dimensional parameter space.

A realistic model of the budding yeast cell cycle contains about 30 differential equations with over 100 rate constants (Chen et al. 2004). The parameters are estimated from the cell-cycle behavior of over 100 mutants defective in the regulatory network. Simulating the entire set takes a few minutes on a desktop PC for one ch...
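As a concrete illustration of these rate equations, here is a minimal forward-Euler integration of dC_i/dt = Σ_j b_ij·v_j for a toy reversible reaction with mass-action kinetics. The network, rate constants, and step size are assumptions chosen for illustration; a real model would use a stiff ODE solver rather than forward Euler.

```python
# Forward-Euler integration of the rate equations dC_i/dt = sum_j b_ij * v_j
# for a toy reversible reaction A <-> B under mass-action kinetics.
# Network, rate constants, and step size are illustrative, not from the paper.

def simulate(c0, stoich, velocities, dt=0.001, steps=5000):
    """Integrate the rate equations with forward Euler.

    c0         : initial concentrations [C_1, ..., C_N]
    stoich     : stoich[i][j] = b_ij, coefficient of species i in reaction j
    velocities : function mapping concentrations -> [v_1, ..., v_R]
    """
    c = list(c0)
    for _ in range(steps):
        v = velocities(c)
        c = [ci + dt * sum(b_ij * vj for b_ij, vj in zip(row, v))
             for ci, row in zip(c, stoich)]
    return c

# Reaction 1: A -> B with k1 = 2.0; reaction 2: B -> A with k2 = 1.0.
stoich = [[-1, +1],   # species A: consumed by r1, produced by r2
          [+1, -1]]   # species B: produced by r1, consumed by r2
velocities = lambda c: [2.0 * c[0], 1.0 * c[1]]

final = simulate([1.0, 0.0], stoich, velocities)
# Total mass is conserved, and the system relaxes toward the
# equilibrium ratio B/A = k1/k2 = 2.
print(final)
```

Because each column of the stoichiometric matrix sums to zero here, total concentration is conserved exactly at every step, which is a useful sanity check on any integrator for such networks.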
This research anticipates a future where “smart cities” rely extensively on data analytics to determine budget allocations, to manage traffic, to design infrastructure, and to advance sustainability efforts. In this study, Helen Nissenbaum's contextual integrity framework helps us understand how smart city residents consider privacy norms, and provides a structure for comparing these norms to current data privacy practices. The study findings and policy recommendations are based on focus group discussions with more than 80 residents of Long Beach, California, as well as 60 responses to an open-ended question asked in a smart city survey.