Preimplantation genetic screening (PGS) is widely used to select in vitro-fertilized embryos free of chromosomal abnormalities and to improve the clinical outcome of in vitro fertilization (IVF). A disadvantage of PGS is that it requires biopsy of the preimplantation human embryo, the invasiveness and complexity of which limit its clinical applicability. Here, we present and validate a noninvasive chromosome screening (NICS) method based on sequencing the genomic DNA secreted into the culture medium by the human blastocyst. Using multiple annealing and looping-based amplification cycles (MALBAC) for whole-genome amplification (WGA), we performed next-generation sequencing (NGS) on the spent culture medium of human blastocysts (n = 42) and obtained ploidy information for all 24 chromosomes. We validated these results against the corresponding whole donated embryos and found high concordance in identifying chromosomal abnormalities (sensitivity, 0.882; specificity, 0.840). With this validated NICS method, we performed chromosome screening on IVF embryos from seven couples affected by balanced translocation, azoospermia, or recurrent pregnancy loss. Six of the couples achieved clinical pregnancies, and five have had healthy live births thus far. By avoiding the need for embryo biopsy, NICS substantially increases the safety of chromosome screening, and its high accuracy and noninvasiveness give it the potential for much wider applicability in clinical IVF.
Graphics processors (GPUs) have recently emerged as powerful coprocessors for general-purpose computation. Compared with commodity CPUs, GPUs have an order of magnitude higher computation power as well as memory bandwidth. Moreover, new-generation GPUs allow writes to random memory locations, provide efficient interprocessor communication through on-chip local memory, and support a general-purpose parallel programming model. Nevertheless, many GPU features are specialized for graphics processing, including the massively multithreaded architecture, the Single-Instruction-Multiple-Data processing style, and the execution model of a single application at a time. Additionally, GPUs rely on a bus of limited bandwidth to transfer data to and from the CPU, do not allow dynamic memory allocation from GPU kernels, and have little hardware support for write conflicts. Therefore, careful design and implementation are required to utilize the GPU for coprocessing database queries. In this article, we present our design, implementation, and evaluation of an in-memory relational query coprocessing system, GDB, on the GPU. Taking advantage of the GPU hardware features, we design a set of highly optimized data-parallel primitives such as split and sort, and use these primitives to implement common relational query processing algorithms. Our algorithms utilize the high parallelism as well as the high memory bandwidth of the GPU, and use parallel computation and memory optimizations to effectively reduce memory stalls. Furthermore, we propose coprocessing techniques that take into account both the computation resources and the GPU-CPU data transfer cost, so that each operator in a query can utilize suitable processors (the CPU, the GPU, or both) for optimized overall performance. (The work of Ke Yang was done while he was visiting HKUST; the work of Bingsheng He and Rui Fang was done while they were students at HKUST.)
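To make the primitive-based design concrete, here is a minimal sketch (not the authors' code, and in sequential Python rather than GPU kernels) of how a split primitive like the one named above can be composed from two simpler data-parallel primitives: a prefix scan over per-partition counts to compute write offsets, followed by a scatter that moves each record to its final slot. The function name and four-way radix example are illustrative assumptions.

```python
from itertools import accumulate

def split(records, partition_of, num_partitions):
    """Partition `records` so all records of partition 0 come first,
    then partition 1, and so on (stable within each partition)."""
    # Phase 1: histogram - count the records falling in each partition.
    counts = [0] * num_partitions
    for r in records:
        counts[partition_of(r)] += 1

    # Phase 2: an exclusive prefix scan over the counts gives each
    # partition's starting write offset in the output array.
    offsets = [0] + list(accumulate(counts))[:-1]

    # Phase 3: scatter each record to its slot. On a GPU, each thread
    # would write to a precomputed per-thread offset to avoid conflicts.
    out = [None] * len(records)
    cursor = offsets[:]  # next free slot for each partition
    for r in records:
        p = partition_of(r)
        out[cursor[p]] = r
        cursor[p] += 1
    return out, offsets

# Example: a 4-way radix split of integers by their low two bits.
data = [7, 2, 9, 4, 1, 6, 3, 8]
parts, offs = split(data, lambda x: x & 3, 4)
# parts -> [4, 8, 9, 1, 2, 6, 7, 3], offs -> [0, 2, 4, 6]
```

On real GPU hardware the three phases each run as a data-parallel kernel over thread-local histograms, which is what lets the primitive saturate memory bandwidth; the sequential loops here only illustrate the data movement.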
We have evaluated our GDB system on a machine with an Intel quad-core CPU and an NVIDIA GeForce 8800 GTX GPU. Our workloads include microbenchmark queries on memory-resident data as well as TPC-H queries that involve complex data types and multiple query operators on data sets larger than the GPU memory. Our results show that our GPU-based algorithms are 2-27x faster than their optimized CPU-based counterparts on in-memory data. Moreover, the performance of our coprocessing scheme is similar to, or better than, both the GPU-only and the CPU-only schemes.
We present our novel design and implementation of relational join algorithms for new-generation graphics processing units (GPUs). The new features of such GPUs include support for writes to random memory locations, efficient inter-processor communication through fast shared memory, and a programming model for general-purpose computing. Taking advantage of these new features, we design a set of data-parallel primitives such as scan, scatter and split, and use these primitives to implement indexed or non-indexed nested-loop, sort-merge and hash joins. Our algorithms utilize the high parallelism as well as the high memory bandwidth of the GPU and use parallel computation to effectively hide the memory latency. We have implemented our algorithms on a PC with an NVIDIA G80 GPU and an Intel P4 dual-core CPU. Our GPU-based algorithms are able to achieve 2-20 times higher performance than their CPU-based counterparts.
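As an illustrative sketch of the join style the abstract describes (assumptions of ours, not the paper's implementation), the following partitioned hash join first splits both relations by a hash of the join key so matching keys land in the same partition, then joins each partition pair independently; on a GPU, each partition pair would be handled by one thread group. The function name, key extractors, and partition count are hypothetical.

```python
def hash_join(R, S, key_r, key_s, num_parts=4):
    """Join relations R and S on key_r(r) == key_s(s)."""
    # Split phase: bucket both relations by a hash of the join key,
    # so that matching keys always fall in the same partition pair.
    r_parts = [[] for _ in range(num_parts)]
    s_parts = [[] for _ in range(num_parts)]
    for r in R:
        r_parts[hash(key_r(r)) % num_parts].append(r)
    for s in S:
        s_parts[hash(key_s(s)) % num_parts].append(s)

    # Probe phase: join each partition pair with a small hash table
    # built over the R side and probed by the S side.
    out = []
    for rp, sp in zip(r_parts, s_parts):
        table = {}
        for r in rp:
            table.setdefault(key_r(r), []).append(r)
        for s in sp:
            for r in table.get(key_s(s), []):
                out.append((r, s))
    return out

# Example: join orders to customers on customer id.
customers = [(1, "ann"), (2, "bo")]
orders = [(10, 1), (11, 2), (12, 1)]
result = hash_join(customers, orders,
                   key_r=lambda c: c[0], key_s=lambda o: o[1])
```

The split phase is exactly the primitive composition described above; keeping each partition small enough to fit in fast on-chip shared memory is what makes the probe phase efficient on the GPU.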