Tissue:plasma partition coefficients are key parameters in physiologically based pharmacokinetic (PBPK) models, yet these coefficients are challenging to measure in vivo. Several mechanism-based equations have been developed to predict partition coefficients from tissue composition information and a compound's physicochemical properties, but it is not clear which, if any, of the methods is most appropriate under given circumstances. Complicating the evaluation, each prediction method was developed, and is typically employed, using a different set of tissue composition values, making a controlled comparison impossible. This study proposed a standardized tissue composition for humans that can be used as a common input for each of five frequently used prediction methods. These methods were implemented in R and were used to predict partition coefficients for 11 drugs classified as strong bases, weak bases, acids, neutrals, and zwitterions. PBPK models developed in R (mrgsolve) for each drug and each set of partition coefficient predictions were compared with the corresponding observed plasma concentration data. Percent root mean square error and half-life percent error were used to evaluate the accuracy of the PBPK model predictions for each partition coefficient method, with results summarized by compound class (strong bases, weak bases, acids, neutrals, and zwitterions). The analysis indicated that no partition coefficient method consistently yielded the most accurate PBPK model predictions. As such, PBPK model predictions using all partition coefficient methods should be considered during drug development.
SIGNIFICANCE STATEMENT Several mechanism-based methods exist to predict the tissue:plasma partition coefficients critical to PBPK modeling. Controlled comparisons are confounded by the use of different tissue composition values for each method; a standardized tissue composition was therefore proposed.
The resulting assessments indicated that no method was consistently superior; therefore, evaluating the sensitivity of PBPK predictions to each partition coefficient method may be warranted prior to model optimization.
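As a rough illustration of the two accuracy metrics named in the abstract, the sketch below computes percent root mean square error and half-life percent error in Python. The normalization of the RMSE by the mean observed concentration, and the signed form of the half-life error, are assumptions for illustration; the study's exact definitions may differ.

```python
import numpy as np

def percent_rmse(obs, pred):
    """Percent RMSE between predicted and observed plasma concentrations.
    Normalizing by the mean observed concentration is an assumption here."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    return 100.0 * rmse / np.mean(obs)

def half_life_percent_error(t_half_obs, t_half_pred):
    """Signed percent error in the predicted terminal half-life."""
    return 100.0 * (t_half_pred - t_half_obs) / t_half_obs
```

With metrics of this form, a perfect concentration-time prediction gives 0% RMSE, and a predicted half-life of 5 h against an observed 4 h gives a +25% half-life error.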
Abstract. Recent years have seen the emergence of a number of AOP languages. While these can mostly be characterized as logic-oriented languages that map situations to courses of action, they are based on a variety of concepts, resulting in obvious differences in syntax and semantics. Less obviously, the development tools and infrastructure surrounding these languages (such as environment integration, reuse mechanisms, debugging, and IDE integration) also vary widely. Two drawbacks of this diversity are a perceived lack of transferability of knowledge and expertise between languages, and a potential obscuring of the fundamental conceptual differences between them. These drawbacks can affect both the languages' uptake and their comparability. In this paper, we present a Common Language Framework that has emerged out of ongoing work on AOP languages that have been deployed through Agent Factory. This framework consists of a set of pre-written components for building agent interpreters, together with a set of tools that can be easily adapted to different AOP languages. Through this framework we have been able to rapidly prototype a range of different AOP languages, one of which is presented as a case study in this paper.
Abstract. Agent-Oriented Programming (AOP) researchers have successfully developed a range of agent programming languages that bridge the gap between theory and practice. Unfortunately, despite the in-community success of these languages, they have proven less compelling to the wider software engineering community. One of the main problems facing AOP language developers is the need to bridge the cognitive gap that exists between the concepts underpinning mainstream languages and those underpinning AOP. In this paper, we attempt to build such a bridge through a conceptual mapping that we subsequently use to drive the design of a new programming language called ASTRA, which has been evaluated by a group of experienced software engineers attending an Agent-Oriented Software Engineering Masters course.
Publication information: Annals of Mathematics and Artificial Intelligence, 61 (4). Abstract. This is the third year in which a team from University College Dublin has participated in the Multi Agent Contest. This paper describes the system that was created to participate in the contest, along with observations of the team's experiences in the contest. The system itself was built using the AF-TeleoReactive and AF-AgentSpeak agent programming languages running on the Agent Factory platform. A hybrid control architecture inspired by the SoSAA strategy aided in the separation of concerns between low-level behaviours (such as movement and obstacle evasion) and higher-level planning and strategy.
Historically, computing instructors and researchers have developed a wide variety of tools to support teaching and educational research, including exam and code testing suites and data collection solutions. Many are then maintained by the community or by individuals. However, these tools often find limited adoption beyond their creators. As a result, it is common for many of the same functionalities to be re-implemented by different instructional groups within the CS Education community. We hypothesize that this is due in part to accessibility, discoverability, and adaptability challenges, among others. Further, instructors often face institutional barriers to deployment, which can include the hesitance of institutions to utilize community-developed solutions that often lack a centralized authority. This working group will explore what solutions are currently available, what instructors need, and the reasons behind the abovementioned phenomenon. This will be accomplished via a literature review and survey to identify the tools that have been developed by the community; the solutions that are currently available and in use by instructors; what features are needed moving forward for classroom and research use; and what support for extensions is needed.