Purpose: Computed tomography (CT) and, in particular, cone beam CT (CBCT) have been used increasingly as diagnostic tools in recent years. Patient motion during acquisition is common in CBCT due to long scan times; it degrades image quality and may increase the number of retakes. Our aim was to develop a marker-free iterative motion correction algorithm that operates on the projection images and is suitable for local tomography.

Methods: We present an iterative motion correction algorithm that detects the patient's motion and accounts for it during reconstruction. The core of our method is a fast GPU-accelerated three-dimensional reconstruction algorithm. Assuming rigid motion, correction is performed by minimizing a pixel-wise cost function between all captured x-ray images and parameterized projections of the reconstructed volume.

Results: Our method is marker-free, requires only projection images, and can handle local tomography data. We demonstrate its effectiveness on both simulated and real motion-beset patient images. The results show that our motion correction algorithm yields accurate reconstructions with sharper edges, better contrast, and more detail.

Conclusions: The presented method corrects patient motion with observable improvements in image quality compared to uncorrected reconstructions. Potentially, this may reduce the number of retakes caused by reconstructions corrupted by patient movement.
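The core idea described above, minimizing a pixel-wise cost between each measured projection and a parameterized re-projection of the reconstructed volume, can be illustrated with a drastically simplified toy model. Here the "projection" is a 1D signal, rigid motion is reduced to a single integer shift, and the motion parameter is recovered by grid search over the cost. All names and this simplification are ours for illustration; the paper's actual method handles full 3D rigid motion with GPU-accelerated reconstruction.

```python
import numpy as np

def shift_signal(x, s):
    """Toy 'forward projection' of a 1D signal under a rigid shift of s samples."""
    return np.roll(x, s)

def motion_cost(measured, volume_proj, s):
    """Pixel-wise sum-of-squared-differences cost for motion parameter s."""
    return np.sum((measured - shift_signal(volume_proj, s)) ** 2)

def estimate_shift(measured, volume_proj, search_range=10):
    """Grid-search the motion parameter that minimizes the pixel-wise cost."""
    costs = {s: motion_cost(measured, volume_proj, s)
             for s in range(-search_range, search_range + 1)}
    return min(costs, key=costs.get)

rng = np.random.default_rng(0)
proj = rng.random(64)                 # 'projection' of the current reconstruction
measured = shift_signal(proj, 3)      # the patient 'moved' by 3 samples
recovered = estimate_shift(measured, proj)
```

In the real algorithm this minimization runs per projection over six rigid-motion parameters and alternates with volume updates; the toy keeps only the cost-minimization structure.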
Modification mapping from cDNA data has become a tremendously important approach in epitranscriptomics. So-called reverse transcription signatures in cDNA contain information on the position and nature of their causative RNA modifications. Mining, for example, Illumina-based high-throughput sequencing data is therefore growing rapidly in importance, yet the field still lacks effective tools. Here we present a versatile, user-friendly graphical workflow system for modification calling based on machine learning. The workflow commences with a principal module for trimming, mapping, and postprocessing; the latter includes quantification of mismatch and arrest rates at single-nucleotide resolution across the mapped transcriptome. Further downstream modules include tools for visualization, machine learning, and modification calling. The machine-learning module provides quality assessment parameters to gauge the suitability of the initial dataset for effective machine learning and modification calling. This output is useful for improving the experimental parameters for library preparation and sequencing. In summary, the automation of the bioinformatics workflow allows a faster turnaround of the optimization cycles in modification calling.
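The per-position mismatch and arrest rates mentioned above can be sketched for a toy case of ungapped alignments. The function name, the read representation, and the use of read starts as a proxy for reverse-transcription drop-off are our simplifying assumptions; the described workflow operates on real mapped sequencing data and tracks stop sites far more carefully.

```python
def rt_signature_rates(reference, reads):
    """Toy per-position mismatch and arrest profiling.

    reads: list of (start, seq) tuples, 0-based ungapped alignments on `reference`.
    Mismatch rate: fraction of covering reads disagreeing with the reference.
    Arrest rate: read starts over coverage at a position, used here as a crude
    proxy for reverse-transcription drop-off events.
    """
    n = len(reference)
    coverage = [0] * n
    mismatches = [0] * n
    starts = [0] * n
    for start, seq in reads:
        starts[start] += 1
        for i, base in enumerate(seq):
            pos = start + i
            coverage[pos] += 1
            if base != reference[pos]:
                mismatches[pos] += 1
    mismatch_rate = [m / c if c else 0.0 for m, c in zip(mismatches, coverage)]
    arrest_rate = [s / c if c else 0.0 for s, c in zip(starts, coverage)]
    return mismatch_rate, arrest_rate

# two reads start at 0 (one with a C->G mismatch at position 1), one starts at 2
mr, ar = rt_signature_rates("ACGTACGT", [(0, "ACGT"), (0, "AGGT"), (2, "GTAC")])
```

A modification-calling step would then flag positions where these rates exceed background, which is where the machine-learning module described above comes in.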
Background: Single-cell transcriptomic sequencing allows the investigation of cell-specific gene expression patterns, which could not be addressed a few years ago. With the advancement of droplet-based protocols, the number of studied cells continues to increase rapidly, establishing the need for software tools that can efficiently process the resulting large-scale datasets. We address this need with RainDrop, a tool for fast gene-cell count matrix computation from single-cell RNA-seq data produced by the 10x Genomics Chromium technology.

Results: RainDrop can process single-cell transcriptomic datasets consisting of 784 million reads sequenced from around 8,000 cells in less than 40 minutes on a standard workstation. It significantly outperforms the established Cell Ranger pipeline and the recently introduced Alevin tool in terms of runtime, with maximal (average) speedups of 30.4 (22.6) and 3.5 (2.4), respectively, while maintaining high agreement with their results.

Conclusions: RainDrop is a C++ software tool for highly efficient processing of large-scale droplet-based single-cell RNA-seq datasets on standard workstations. It is available at https://gitlab.rlp.net/stnieble/raindrop.
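The essence of a gene-cell count matrix computation, as performed by tools like the one described above, is deduplicating UMIs per (cell barcode, gene) pair so that PCR duplicates are counted once. The sketch below is our minimal illustration with hypothetical record tuples; it omits the barcode error correction, read mapping, and UMI collision handling that a real pipeline must implement.

```python
from collections import defaultdict

def count_matrix(records):
    """Build a gene-cell count matrix from (cell_barcode, umi, gene) records.

    Each unique UMI per (cell, gene) pair contributes one count, so reads that
    are PCR duplicates of the same molecule are collapsed.
    Returns a nested dict {cell_barcode: {gene: molecule_count}}.
    """
    umis = defaultdict(set)
    for cell_barcode, umi, gene in records:
        umis[(cell_barcode, gene)].add(umi)
    matrix = defaultdict(dict)
    for (cell_barcode, gene), umi_set in umis.items():
        matrix[cell_barcode][gene] = len(umi_set)
    return dict(matrix)

# "U1" for Gene1 in cell AAAC appears twice (a PCR duplicate) and counts once
records = [("AAAC", "U1", "Gene1"), ("AAAC", "U1", "Gene1"),
           ("AAAC", "U2", "Gene1"), ("TTTG", "U1", "Gene2")]
matrix = count_matrix(records)
```

For the dataset sizes quoted above (hundreds of millions of reads), a production tool would use compact barcode encodings and sparse matrix output rather than Python dictionaries.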
Abstract. Automatic determination of fronts from atmospheric data is an important task for weather prediction as well as for research into synoptic-scale phenomena. In this paper we introduce a deep neural network to detect and classify fronts from multi-level ERA5 reanalysis data. Model training and prediction are evaluated using two different regions covering Europe and North America, with data from two weather services. We apply label deformation within our loss function to create the final output, which removes the need for the skeleton operations or other complicated post-processing steps used in other work. We obtain good prediction scores, with a critical success index higher than 66.9 % and an object detection rate of more than 77.3 %. Frontal climatologies of our network are highly correlated (greater than 77.2 %) with climatologies created from weather service data. Comparison with a well-established baseline method based on thermodynamic criteria shows a better performance of our network classification. Evaluated cross sections further show that both the surface front data of the weather services and our network classification are physically plausible. Finally, we investigate the link between fronts and extreme precipitation events to showcase possible applications of the proposed method, demonstrating its usefulness for scientific investigations.
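The critical success index quoted above is a standard verification score for sparse events such as fronts. As a minimal illustration, here is a pixel-wise version on boolean masks; note this is our sketch of the general metric, not the paper's evaluation code, which also uses object-based detection rates and label deformation.

```python
import numpy as np

def critical_success_index(pred, truth):
    """CSI = hits / (hits + misses + false alarms) over boolean masks.

    Correct negatives (both grids empty) are ignored, which is why CSI is
    preferred over accuracy for rare features like frontal pixels.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    hits = np.sum(pred & truth)
    misses = np.sum(~pred & truth)
    false_alarms = np.sum(pred & ~truth)
    return hits / (hits + misses + false_alarms)

# one hit, one false alarm, one miss, one correct negative -> CSI = 1/3
csi = critical_success_index([1, 1, 0, 0], [1, 0, 1, 0])
```

In frontal verification the masks would be 2D label grids per front type, typically with some spatial tolerance before counting hits.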
Abstract. Automatic determination of fronts from atmospheric data is an important task for weather prediction. In this paper we introduce a deep neural network to detect and classify fronts from multi-level ERA5 reanalysis data. Model training and prediction are evaluated using two different regions covering Europe and North America. We apply label deformation within our loss function to create the final output, which removes the need for the skeleton operations or other complicated post-processing steps observed in other work. We observe good prediction scores, with a critical success index (CSI) higher than 62.9 % and an object detection rate of more than 73 %. Frontal climatologies of our network are highly correlated (greater than 79.6 %) with climatologies created from weather service data. Evaluated cross sections further show that our network's classification is physically plausible. Comparison with a well-established baseline method (ETH Zurich) shows a better performance of our network classification.