Supplementary data are available at Bioinformatics online.
One of the more interesting ideas for achieving personalized, preventive, and participatory medicine is the concept of a digital twin: a personalized computer model of a patient. So far, digital twins have been constructed using either mechanistic models, which can simulate the trajectory of physiological and biochemical processes in a person, or machine learning models, which can, for example, estimate the risk of stroke from a cross-sectional profile at a given timepoint. These two modelling approaches have complementary strengths, which can be combined in a hybrid model. However, although hybrid modelling combining mechanistic models and machine learning has been proposed, there are few, if any, real examples of hybrid digital twins available. We now present such a hybrid model for the simulation of ischemic stroke. On the mechanistic side, we develop a new model for blood pressure and integrate it with an existing multi-level, multi-timescale model of type 2 diabetes development. This mechanistic model can simulate the evolution of known physiological risk factors (such as weight, diabetes development, and blood pressure) over time, under different intervention scenarios involving changes in diet, exercise, and certain medications. A machine learning model then uses these forecast trajectories of the physiological risk factors to calculate the 5-year risk of stroke, which can thus be computed at each timepoint in the simulated scenarios. We discuss and illustrate practical issues with clinical implementation, such as data gathering and harmonization. By improving patients' understanding of their body and health, the digital twin can serve as a valuable tool for patient education and as a conversation aid during the clinical encounter. As such, it can facilitate shared decision-making, promote behavior change towards a healthy lifestyle, and improve adherence to prescribed medications.
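The hybrid structure described above — a mechanistic forward simulation of risk factors whose outputs feed a statistical risk model — can be sketched in a few lines. Everything below is a toy illustration: the energy-balance equation, the logistic risk function, and all coefficients are invented for exposition and are not the paper's actual models.

```python
import math

def simulate_weight(w0_kg, intake_kcal, days):
    """Toy mechanistic component: Euler-step a simple energy-balance
    model, where daily weight change is the caloric surplus divided by
    ~7700 kcal per kg of body mass (all values illustrative)."""
    maintenance_per_kg = 22.0  # kcal/day needed per kg of body weight
    w, traj = w0_kg, [w0_kg]
    for _ in range(days):
        surplus = intake_kcal - maintenance_per_kg * w
        w += surplus / 7700.0
        traj.append(w)
    return traj

def five_year_stroke_risk(weight_kg, systolic_bp):
    """Stand-in for the machine learning component: a logistic function
    of two risk factors with made-up coefficients."""
    z = -9.0 + 0.03 * weight_kg + 0.03 * systolic_bp
    return 1.0 / (1.0 + math.exp(-z))

# Forecast a dietary-intervention scenario, then score the stroke risk
# at every timepoint along the simulated trajectory.
baseline = simulate_weight(95.0, 2400, 365)   # current diet
diet     = simulate_weight(95.0, 1800, 365)   # reduced-calorie diet
risks    = [five_year_stroke_risk(w, 140) for w in diet]
```

The key design point of the hybrid twin survives even in this caricature: the mechanistic part produces a trajectory under a hypothetical intervention, and the statistical part turns each simulated state into a risk estimate.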
In recent decades, significant changes in science have driven a sharp increase in the number of articles published every year. This growth creates a new difficulty for scientists, who must make an extra effort to select the literature relevant to their work. In this work, we present a pipeline for generating scientific-literature knowledge graphs in the agriculture domain. The pipeline combines Semantic Web and natural language processing technologies, which make data understandable to computer agents and thereby empower the development of end-user applications for literature search. The workflow consists of (1) RDF generation, covering both metadata and contents; (2) semantic annotation of the content; and (3) property-graph population, adding domain knowledge from ontologies to the previously generated RDF data describing the articles. We applied this pipeline to a set of 127 agriculture articles, generating a knowledge graph implemented in Neo4j and publicly available as a Docker image. The potential of our model is illustrated through a series of queries and use cases, which not only include queries about authors or references but also address article similarity and clustering based on semantic annotation, facilitated by the inclusion of domain ontologies in the graph.
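Steps (2) and (3) of the workflow can be sketched as dictionary-based semantic annotation against ontology labels, followed by emission of Cypher MERGE statements to populate a Neo4j property graph. This is only a minimal sketch: the ontology fragment, URIs, and graph schema below are invented for illustration, and a real pipeline would use proper NLP-based annotation rather than substring matching.

```python
# Hypothetical fragment of an agriculture ontology: label -> URI.
AGRO_TERMS = {
    "drought stress": "http://example.org/agro#DroughtStress",
    "wheat": "http://example.org/agro#Wheat",
    "yield": "http://example.org/agro#Yield",
}

def annotate(text):
    """Return (label, URI) pairs whose ontology label occurs in the text.
    Longest labels are tried first so multi-word terms win over substrings."""
    lowered = text.lower()
    return [(label, uri)
            for label, uri in sorted(AGRO_TERMS.items(),
                                     key=lambda kv: -len(kv[0]))
            if label in lowered]

def to_cypher(article_id, annotations):
    """Emit Cypher statements linking an Article node to Concept nodes,
    one statement per annotation (idempotent thanks to MERGE)."""
    stmts = [f"MERGE (:Article {{id: '{article_id}'}})"]
    for _, uri in annotations:
        stmts.append(
            f"MATCH (a:Article {{id: '{article_id}'}}) "
            f"MERGE (c:Concept {{uri: '{uri}'}}) "
            f"MERGE (a)-[:MENTIONS]->(c)"
        )
    return stmts

anns = annotate("Drought stress reduces wheat yield.")
statements = to_cypher("a1", anns)
```

Linking articles to shared Concept nodes is what later enables the similarity and clustering queries mentioned above: two articles are related when they mention overlapping ontology concepts.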
The Web of Data aims at linking Internet data repositories. Semantic Web technologies make data easily readable by computer agents, enabling the automation of complex tasks and facilitating data integration. They bring the Web of Data closer, allowing users to query connected datasets in search-engine style, i.e. using keywords. However, querying semantic repositories in a user-friendly way, without requiring mastery of query languages such as SPARQL, remains a challenging task. In this work, we present Semankey, an approach for automatically building SPARQL queries from a list of keywords entered by the user. Semankey identifies semantic entities in the keywords using a domain ontology to interpret the query's meaning, and automatically builds a set of queries by connecting the entities through the relationships described in the ontology and by applying query-size heuristics. The main contributions of Semankey are the use of query filters and the generation of multiple SPARQL queries derived from the different interpretations of the given input, according to the underlying domain ontology. We used data from the Question Answering over Linked Data challenge to evaluate our approach in different execution modes and to analyse the generated query trees, obtaining a precision of 0.52 and a recall of 0.60 when considering the best answer provided per test case.
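The core idea — map each keyword to an ontology term (entity linking), then join the resulting triple patterns into a SPARQL query — can be sketched as follows. This is a deliberately stripped-down version: Semankey itself enumerates multiple candidate queries over the ontology graph and ranks them with size-based heuristics, whereas the sketch below emits a single candidate, and the keyword index, prefix, and URIs are invented for illustration.

```python
# Hypothetical keyword -> ontology-term index produced by entity linking.
ENTITY_INDEX = {
    "article": ("class", "ex:Article"),
    "author":  ("property", "ex:hasAuthor"),
    "title":   ("property", "ex:title"),
}

def build_sparql(keywords):
    """Build one candidate SPARQL query from a keyword list: classes
    become `?x a <Class>` patterns, properties connect ?x to a fresh
    variable, and all patterns share the subject ?x."""
    patterns = []
    for i, kw in enumerate(keywords):
        kind, term = ENTITY_INDEX[kw.lower()]
        if kind == "class":
            patterns.append(f"?x a {term} .")
        else:
            patterns.append(f"?x {term} ?v{i} .")
    body = "\n  ".join(patterns)
    return ("PREFIX ex: <http://example.org/onto#>\n"
            f"SELECT * WHERE {{\n  {body}\n}}")

query = build_sparql(["article", "author"])
```

A real system must additionally handle keywords that match several ontology terms — which is exactly why Semankey generates multiple queries, one per plausible interpretation, rather than committing to a single mapping as this sketch does.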