This project was funded by the NIHR Health Technology Assessment programme and the Wellcome Trust and will be published in full in Health Technology Assessment; Vol. 18, No. 43. See the NIHR Journals Library website for further project information.
What to prescribe for a patient in general practice when the choice of treatments has a limited evidence base? Tjeerd-Pieter van Staa and colleagues argue that using electronic health records to enter patients into randomised trials of treatments in real time could provide the answer.

Tjeerd-Pieter van Staa, head of research and honorary professor of epidemiology
PURPOSE The learning health care system refers to the cycle of turning health care data into knowledge, translating that knowledge into practice, and creating new data by means of advanced information technology. The electronic Primary Care Research Network (ePCRN) was a project, funded by the US National Institutes of Health, that aimed to facilitate clinical research using primary care electronic health records (EHRs).
METHODS We identified the requirements necessary to deliver clinical studies via a distributed electronic network linked to EHRs. After we explored a variety of informatics solutions, we constructed a functional prototype of the software. We then explored the barriers to adoption of the prototype software within US practice-based research networks.
RESULTS We developed a system to assist in the identification of eligible cohorts from EHR data. To preserve privacy, counts and flagging were performed remotely, and no data were transferred out of the EHR. A lack of batch export facilities from EHR systems and ambiguities in the coding of clinical data, such as blood pressure, have so far prevented a full-scale deployment. We created an international consortium and a model for sharing further ePCRN development across a variety of ongoing projects in the United States and Europe.

CONCLUSIONS A means of accessing health care data for research is not sufficient in itself to deliver a learning health care system. EHR systems need to use sophisticated tools to capture and preserve rich clinical context in coded data, and business models need to be developed that incentivize all stakeholders, from clinicians to vendors, to participate in the system.
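The privacy-preserving pattern described above, where eligibility queries run inside the practice system and only aggregate counts are returned, can be illustrated with a minimal sketch. The table name, fields, and eligibility criteria below are hypothetical and are not taken from ePCRN:

```python
import sqlite3

def eligible_count(conn, min_age, max_sbp):
    """Run the eligibility query inside the practice's own database and
    return only an aggregate count -- no patient-level rows leave the EHR."""
    cur = conn.execute(
        "SELECT COUNT(*) FROM patients WHERE age >= ? AND systolic_bp < ?",
        (min_age, max_sbp),
    )
    return cur.fetchone()[0]

# Hypothetical in-memory EHR standing in for a practice database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, age INTEGER, systolic_bp INTEGER)")
conn.executemany(
    "INSERT INTO patients VALUES (?, ?, ?)",
    [(1, 67, 132), (2, 45, 128), (3, 72, 135), (4, 58, 160)],
)

# Only the count (here, patients aged 60+ with systolic BP < 140 mm Hg)
# is reported back to the coordinating centre.
print(eligible_count(conn, 60, 140))
```

The design choice mirrored here is that the query travels to the data rather than the data travelling to the query, which is what allows recruitment screening without record-level export.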
The model allows structured analysis of data privacy and confidentiality issues in research with patient data, and it provides a framework to specify a privacy-compliant data flow, to communicate privacy requirements, and to identify weak points in an implementation of data privacy.
A content-centric network is one which supports host-to-content routing, rather than the host-to-host routing of the existing Internet. This paper investigates the potential of caching data at the router-level in content-centric networks. To achieve this, two measurement sets are combined to gain an understanding of the potential caching benefits of deploying content-centric protocols over the current Internet topology. The first set of measurements is a study of the BitTorrent network, which provides detailed traces of content request patterns. This is then combined with CAIDA's ITDK Internet traces to replay the content requests over a real-world topology. Using this data, simulations are performed to measure how effective content-centric networking would have been if it were available to these consumers/providers. We find that larger cache sizes (10,000 packets) can create significant reductions in packet path lengths. On average, 2.02 hops are saved through caching (a 20% reduction), whilst also allowing 11% of data requests to be maintained within the requester's AS. Importantly, we also show that these benefits extend significantly beyond that of edge caching by allowing transit ASes to also reduce traffic.
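The replay methodology described above can be sketched with a small simulation: requests travel along a router path toward the origin, each router keeps a fixed-size cache, and a cache hit truncates the path. The topology, trace, and LRU eviction policy below are illustrative assumptions, not the paper's actual CAIDA/BitTorrent data or policy:

```python
from collections import OrderedDict

class LRUCache:
    """Fixed-capacity least-recently-used cache, a common choice for
    router-level content caching."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)  # mark as recently used
            return True
        return False

    def put(self, key):
        self.store[key] = True
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

def replay(path, requests, cache_size):
    """Replay a content request trace along a fixed router path and
    return the mean hop count per request. Each router on the traversed
    segment caches the content; a cache hit truncates the path."""
    caches = [LRUCache(cache_size) for _ in path]
    total_hops = 0
    for item in requests:
        hops = len(path)  # default: fetch from the origin server
        for i, cache in enumerate(caches, start=1):
            if cache.get(item):
                hops = i  # content found i hops from the requester
                break
        total_hops += hops
        for cache in caches[:hops]:  # populate caches along the way back
            cache.put(item)
    return total_hops / len(requests)

# Hypothetical 10-router path and a skewed trace ("a" is popular content).
trace = ["a", "b", "a", "a", "c", "a", "b", "a"]
print(replay(range(10), trace, cache_size=4))
```

Even this toy trace shows the mechanism behind the reported hop savings: popular items are answered one hop away after the first fetch, so skewed request popularity translates directly into shorter average paths.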
Liquid/liquid dispersion in static mixers has been investigated using Lightnin “In‐liner” mixing elements. The average drop size was found to decrease with increasing residence time, gradually approaching an equilibrium size whose magnitude agrees reasonably well with Kolmogoroff's theory for drop rupture in turbulent flows.
The efficiency at which mechanical energy is utilized in the generation of new interfacial area was evaluated as a function of design and operating conditions and was found to be highest when the final drop size is much larger than the achievable equilibrium value.
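"Kolmogoroff's theory" referenced above is usually expressed through the Kolmogorov–Hinze scaling for the maximum stable drop diameter in turbulent flow. The form below is the standard textbook result, given here for context; the abstract does not state the exact correlation the study used:

```latex
% Kolmogorov--Hinze scaling for the maximum stable drop size in
% locally isotropic turbulence (assumes breakup is controlled by
% inertial-subrange eddies and dispersed-phase viscosity is negligible):
%   \sigma        -- interfacial tension
%   \rho_c        -- continuous-phase density
%   \varepsilon   -- turbulent energy dissipation rate per unit mass
\[
  d_{\max} = C \left( \frac{\sigma}{\rho_c} \right)^{3/5} \varepsilon^{-2/5},
  \qquad C \approx 0.7
\]
```

Drops larger than \(d_{\max}\) are ruptured by turbulent pressure fluctuations, which is why the mean drop size in the mixer decreases with residence time toward this equilibrium value.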
Objective Biomedical research increasingly relies on the integration of information from multiple heterogeneous data sources. Despite the fact that structural and terminological aspects of interoperability are interdependent and rely on a common set of requirements, current efforts typically address them in isolation. We propose a unified ontology-based knowledge framework to facilitate interoperability between heterogeneous sources, and investigate whether using the LexEVS terminology server is a viable implementation method.

Materials and methods We developed a framework based on an ontology, the general information model (GIM), to unify structural models and terminologies, together with relevant mapping sets. This allowed uniform access to these resources within LexEVS, facilitating interoperability for the various components and data sources of implementing architectures.

Results Our unified framework has been tested in the context of the EU Framework Programme 7 TRANSFoRm project, where it was used to achieve data integration in a retrospective diabetes cohort study. The GIM was successfully instantiated in TRANSFoRm as the clinical data integration model, and the necessary mappings were created to support effective information retrieval for software tools in the project.

Conclusions We present a novel, unifying approach to interoperability challenges in heterogeneous data sources, representing structural and semantic models in one framework. Systems using this architecture can rely solely on the GIM, which abstracts over both structure and coding. Information models, terminologies and mappings are all stored in LexEVS and can be accessed in a uniform manner (implementing the HL7 CTS2 service functional model). The system is flexible and should reduce the effort needed from data-source personnel to implement and manage the integration.
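The core idea above, a single abstract model mediating between sources that use different terminologies, can be sketched as a concept-to-code translation step. The concept names, codes, and map below are hypothetical; the real system stores models and mappings in LexEVS behind a CTS2 interface, none of which is modelled here:

```python
# Hypothetical map from a model-level concept to source-local codes.
CONCEPT_MAP = {
    "diabetes_mellitus_type2": {
        "read_v2": "C10F.",
        "icd10": "E11",
    },
}

def translate(concept, target_scheme):
    """Resolve an abstract concept to a code in the target terminology."""
    try:
        return CONCEPT_MAP[concept][target_scheme]
    except KeyError:
        raise LookupError(f"no mapping for {concept} in {target_scheme}")

def find_cases(records, concept, scheme):
    """Query a locally-coded source using one scheme-independent concept:
    the caller never needs to know which terminology the source uses."""
    code = translate(concept, scheme)
    return [r for r in records if r["code"].startswith(code)]

# Hypothetical source whose records are coded in ICD-10.
source = [{"id": 1, "code": "E11.9"}, {"id": 2, "code": "I10"}]
print(find_cases(source, "diabetes_mellitus_type2", "icd10"))
```

The design point this illustrates is the one made in the conclusions: query tools depend only on the abstract model, so adding a source coded in another terminology requires a new mapping set, not a change to the tools.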