Synthetic DNA is durable and can encode digital data with high density, making it an attractive medium for data storage. However, recovering stored data at large scale currently requires that all the DNA in a pool be sequenced, even if only a subset of the information needs to be extracted. Here, we encode and store 35 distinct files (over 200 MB of data) in more than 13 million DNA oligonucleotides, and show that we can recover each file individually and with no errors using a random access approach. We design and validate a large library of primers that enable individual recovery of all files stored within the DNA. We also develop an algorithm that greatly reduces the sequencing read coverage required for error-free decoding by maximizing information from all sequence reads. These advances demonstrate a viable, large-scale system for DNA data storage and retrieval.
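As a rough illustration of the random access idea, the minimal Python sketch below tags each file with a file-specific primer sequence and keeps only the sequencing reads that carry it. The file names, primer sequences, and simple prefix filter are illustrative assumptions, not the paper's actual protocol (which selects files by PCR amplification with validated primer pairs).

```python
# Minimal sketch of primer-based random access. File names, primer sequences,
# and the prefix-filtering step are illustrative assumptions only.
FILE_PRIMERS = {
    "file_a": "ACGTACGTAC",   # hypothetical forward primer for file_a
    "file_b": "GGATCCTTAA",   # hypothetical forward primer for file_b
}

def select_reads(reads, file_id):
    """Keep only reads that start with the primer assigned to file_id."""
    primer = FILE_PRIMERS[file_id]
    return [r for r in reads if r.startswith(primer)]

reads = [
    "ACGTACGTACTTGGCCAATT",  # would belong to file_a
    "GGATCCTTAACCGGTTAACC",  # would belong to file_b
]
print(select_reads(reads, "file_a"))  # only file_a's oligos proceed to decoding
```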
Controlled release (CR) dosage forms have been extensively used to improve therapy with several important drugs. However, their development faces several physiological difficulties, such as the inability to restrain and localize the system within the desired region of the gastrointestinal tract and the highly variable nature of the gastric emptying process. This variability may lead to unpredictable bioavailability and times to achieve peak plasma levels. On the other hand, incorporating the drug in a controlled release gastroretentive dosage form (CR-GRDF), which can remain in the gastric region for several hours, would significantly prolong the gastric residence time of drugs and improve bioavailability, reduce drug waste, and enhance the solubility of drugs that are poorly soluble in high-pH environments. Gastroretention would also facilitate local drug delivery to the stomach and proximal small intestine. Thus, gastroretention could help provide greater availability of new products and, consequently, improved therapeutic activity and substantial benefits to patients. Controlled gastric retention of solid dosage forms may be achieved by the mechanisms of flotation, mucoadhesion, sedimentation, expansion, or a modified-shape system. The purpose of this paper is to review the recent literature and current technology used in the development of gastroretentive dosage forms.
Network contention has an increasingly adverse effect on the performance of parallel applications as parallel machines grow in size. Machines of the petascale era are forcing application developers to map tasks intelligently to job partitions to achieve the best performance possible. This paper presents a framework for automated mapping of parallel applications with regular communication graphs to two- and three-dimensional mesh and torus networks. The framework spares application developers much of the effort of generating mappings for their individual applications. One component of the framework is a process topology analyzer that detects regular patterns in an application's communication graph and, if found, determines its dimensions. The other component is a suite of heuristic techniques for mapping 2D object grids to 2D and 3D processor meshes. The framework chooses the best heuristic from the suite for a given object grid and processor mesh pair based on the hop-bytes metric. We show performance improvements using the framework for a 2D Stencil benchmark in MPI and the Weather Research and Forecasting model running on the IBM Blue Gene/P. We also compare our algorithms with others discussed in the literature.
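The hop-bytes metric can be stated concretely: it weights every byte communicated by the number of network hops it must traverse under a given task-to-processor mapping, so lower values mean less traffic travels far. The short Python sketch below computes it for a toy mapping on a 2D mesh; the task placement, traffic volumes, and mesh coordinates are made-up values for illustration.

```python
# Sketch of the hop-bytes metric for a mapping of tasks onto a 2D mesh.
# The placement, coordinates, and traffic values below are illustrative assumptions.

def manhattan_hops(a, b):
    """Hop count between two processors at 2D mesh coordinates a and b."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def hop_bytes(traffic, placement):
    """Sum of (bytes exchanged) * (hops traversed) over all communicating task pairs."""
    total = 0
    for (src, dst), nbytes in traffic.items():
        total += nbytes * manhattan_hops(placement[src], placement[dst])
    return total

# Example: three tasks placed on a 2x2 mesh (coordinates are (x, y)).
placement = {0: (0, 0), 1: (0, 1), 2: (1, 1)}
traffic = {(0, 1): 1000, (1, 2): 500, (0, 2): 200}  # bytes per communicating pair
print(hop_bytes(traffic, placement))  # lower is better: fewer bytes travel many hops
```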
Network Function Virtualization (NFV) has the potential to significantly reduce capital and operating expenses, shorten product release cycles, and improve service agility. In this paper, we focus on minimizing the total number of Virtual Network Function (VNF) instances needed to provide a specific service (possibly at different locations) to all the flows in a network. Certain network security and analytics applications may allow fractional processing of a flow at different nodes (corresponding to datacenters), giving an opportunity for greater optimization of resources. Through a reduction from the set cover problem, we show that this problem is NP-hard and cannot even be approximated within a factor of (1 − o(1)) ln m (where m is the number of flows) unless P = NP. We then design two simple greedy algorithms and prove that they achieve an approximation ratio of (1 − o(1)) ln m + 2, which is asymptotically optimal. For special cases where each node hosts multiple VNF instances (which is typically true in practice), we also show that our greedy algorithms have a constant approximation ratio. Further, for tree topologies we develop an optimal greedy algorithm by exploiting the inherent topological structure. Finally, we conduct extensive numerical experiments to evaluate the performance of our proposed algorithms in various scenarios.
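To make the set-cover connection concrete, here is a hedged Python sketch of a set-cover-style greedy placement: repeatedly place one VNF instance at the node whose traversing flows cover the most still-unserved flows. The node and flow structures are illustrative, and the paper's actual algorithms may differ, for example in how they handle fractional processing and per-node instance capacities.

```python
# Hedged sketch of a set-cover-style greedy VNF placement. Input structures are
# illustrative assumptions; this is not the paper's exact algorithm.

def greedy_vnf_placement(flows_per_node, all_flows):
    """flows_per_node: node -> set of flows that pass through it."""
    uncovered = set(all_flows)
    placements = []
    while uncovered:
        # Pick the node that serves the largest number of still-uncovered flows.
        best = max(flows_per_node, key=lambda n: len(flows_per_node[n] & uncovered))
        gain = flows_per_node[best] & uncovered
        if not gain:
            break  # remaining flows traverse no candidate node
        placements.append(best)
        uncovered -= gain
    return placements

flows_per_node = {"v1": {"f1", "f2"}, "v2": {"f2", "f3"}, "v3": {"f3"}}
print(greedy_vnf_placement(flows_per_node, {"f1", "f2", "f3"}))  # e.g. ['v1', 'v2']
```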
In this paper, we study the scheduling problem for downlink transmission in a multi-channel (e.g., OFDM-based) wireless network. We focus on a single cell, with the aim of developing a unifying framework for designing low-complexity scheduling policies that can provide optimal performance in terms of both throughput and delay. We develop new, easy-to-verify sufficient conditions for rate-function delay optimality (in the many-channel many-user asymptotic regime) and throughput optimality (in the general non-asymptotic setting), respectively. The sufficient conditions allow us to prove rate-function delay optimality for a class of Oldest Packets First (OPF) policies and throughput optimality for a large class of Maximum Weight in the Fluid limit (MWF) policies, respectively. By exploiting the special features of our carefully chosen sufficient conditions and intelligently combining policies from the classes of OPF and MWF policies, we design hybrid policies that are both rate-function delay-optimal and throughput-optimal with a complexity of O(n^2.5 log n), where n is the number of channels or users. Our sufficient condition is also used to show that a previously proposed policy called Delay Weighted Matching (DWM) is rate-function delay-optimal. However, DWM incurs a high complexity of O(n^5). Thus, our approach yields significantly lower complexity than the only previously designed delay- and throughput-optimal scheduling policy. We also conduct numerical experiments to validate our theoretical results.
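As a rough illustration of the Oldest-Packets-First idea, the Python sketch below assigns each channel to the connected user whose head-of-line packet has waited longest. The queue ages and connectivity are made-up values, and the paper's actual OPF class, MWF policies, and hybrid construction involve structure (such as matching-based allocation) not captured here.

```python
# Illustrative OPF-style allocation: each channel serves the connected user whose
# head-of-line packet is oldest. All inputs are made-up assumptions.

def opf_allocate(hol_age, connectivity):
    """hol_age: user -> waiting time of its oldest queued packet (None if empty).
    connectivity: channel -> set of users that channel can serve this slot."""
    schedule = {}
    for channel, users in connectivity.items():
        candidates = [u for u in users if hol_age.get(u) is not None]
        if candidates:
            schedule[channel] = max(candidates, key=lambda u: hol_age[u])
    return schedule

hol_age = {"u1": 3, "u2": 7, "u3": None}            # slots waited by oldest packet
connectivity = {"ch1": {"u1", "u2"}, "ch2": {"u2", "u3"}}
print(opf_allocate(hol_age, connectivity))           # e.g. {'ch1': 'u2', 'ch2': 'u2'}
```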