As indicated in Chapter 3, many potential sources of data are now available for modelling purposes. These range from historical literature references for a few compounds to highly curated databases of hundreds of thousands of compounds, available via the internet. Before including any data in an in silico model, the question of data quality must be addressed. Although it is difficult to define the quality of data in absolute terms, it is possible to assess the suitability of data for a given purpose. There are many reasons for variability within data, and the degree of error that is acceptable for one model may not be the same as for another. For example, generating a global model intended to pre-screen large numbers of compounds does not require the same degree of accuracy as performing an individual risk assessment for a chemical of interest. In this chapter, sources of data variability and error will be discussed, and formal methods to score data quality, such as use of the Klimisch criteria, will be described. Examples of data quality issues will be given for specific endpoints relating to both environmental and human health effects. Mathematical approaches (Dempster-Shafer theory and Bayesian networks) demonstrating how this information relating to confidence in the data can be incorporated into in silico models are also discussed.
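As a minimal sketch of how Dempster-Shafer theory can combine evidence of differing reliability, the example below fuses two hypothetical sources (an assay result and a QSAR prediction) over the frame {toxic, nontoxic}; unassigned mass on the full frame represents uncertainty about data quality. The mass values and source names are illustrative assumptions, not values from the chapter.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for mass functions keyed by frozenset focal elements."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to incompatible hypotheses
    # Normalise by 1 - K, where K is the total conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

TOX = frozenset({"toxic"})
NON = frozenset({"nontoxic"})
ALL = TOX | NON  # the frame of discernment: mass here expresses ignorance

# Hypothetical evidence sources; lower data quality = more mass left on ALL
m_assay = {TOX: 0.6, ALL: 0.4}             # in vitro assay, moderate confidence
m_qsar = {TOX: 0.3, NON: 0.2, ALL: 0.5}    # QSAR prediction, lower confidence

m = combine(m_assay, m_qsar)
print(round(m[TOX], 3))  # combined belief mass on "toxic"
```

Because each source keeps part of its mass on the whole frame, a low-quality source pulls the combined result toward ignorance rather than toward a firm verdict, which is the behaviour one wants when incorporating data-quality scores into a model.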
This article provides a set of general conditions for identifying efficient sequential testing strategies when test information is uncertain. First, we survey the Bayesian Value-of-Information (VOI) approach to test selection. Second, we extend the approach to study sequential testing systems as applied in toxicology, but also relevant in other domains. We show how the order of tests in the sequence and the stopping rule depend on prior beliefs, the diagnostic performance of tests, and testing costs. We illustrate our findings with an example from short-term genotoxicity testing and discuss implications for developing optimized sequential testing strategies for risk management of chemicals.
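The VOI logic described above can be sketched for a single imperfect test: a test is worth running only if the expected reduction in decision loss from updating the prior exceeds the test's cost. All numbers below (prior, sensitivity, specificity, misclassification losses) are illustrative assumptions, not values from the article.

```python
def expected_loss(p_toxic, decision, loss_fn=100.0, loss_fp=10.0):
    # Loss of accepting a toxic chemical (false negative) vs rejecting a safe one
    return p_toxic * loss_fn if decision == "accept" else (1.0 - p_toxic) * loss_fp

def best_loss(p_toxic):
    # The decision maker picks whichever action minimises expected loss
    return min(expected_loss(p_toxic, "accept"), expected_loss(p_toxic, "reject"))

def value_of_test(prior, sens, spec, cost):
    """Expected value of one imperfect test: preposterior loss reduction minus test cost."""
    p_pos = sens * prior + (1.0 - spec) * (1.0 - prior)  # marginal prob. of a positive
    p_neg = 1.0 - p_pos
    post_pos = sens * prior / p_pos                       # Bayes update after a positive
    post_neg = (1.0 - sens) * prior / p_neg               # Bayes update after a negative
    preposterior = p_pos * best_loss(post_pos) + p_neg * best_loss(post_neg)
    return best_loss(prior) - preposterior - cost

# Prior belief 0.2 that the chemical is toxic; test with 90% sensitivity,
# 80% specificity, unit cost
print(value_of_test(prior=0.2, sens=0.9, spec=0.8, cost=1.0))
```

In a sequential strategy, this calculation is repeated at each stage with the updated posterior as the new prior; testing stops once no remaining test has positive value, which is how the stopping rule comes to depend jointly on beliefs, diagnostic performance, and costs.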
Societies worldwide are investing considerable resources into the safe development and use of nanomaterials. Although each of these protective efforts is crucial for governing the risks of nanomaterials, none is sufficient in isolation. What is missing is a more integrative governance approach that goes beyond legislation. Development of this approach must be evidence based and involve key stakeholders to ensure acceptance by end users. The challenge is to develop a framework that coordinates the variety of actors involved in nanotechnology and civil society to facilitate consideration of the complex issues that occur in this rapidly evolving research and development area. Here, we propose three sets of essential elements required to generate an effective risk governance framework for nanomaterials: (1) advanced tools to facilitate risk-based decision making, including an assessment of the needs of users regarding risk assessment, mitigation, and transfer; (2) an integrated model of predicted human behavior and decision making concerning nanomaterial risks; and (3) legal and other (nano-specific and general) regulatory requirements to ensure compliance and to stimulate proactive approaches to safety. The implementation of such an approach should facilitate and motivate good practice among the various stakeholders to allow the safe and sustainable future development of nanotechnology.