As scanners produce higher-resolution and more densely sampled images, data storage, transmission and communication within healthcare systems become increasingly challenging. Since the quality of medical images plays a crucial role in diagnostic accuracy, medical image compression techniques are needed that reduce scan bitrate while guaranteeing lossless reconstruction. This paper presents a lossless compression method that integrates a Recurrent Neural Network (RNN) as a 3D sequence prediction model. The aim is to learn the long-range dependencies of a voxel's 3D neighbourhood using a Long Short-Term Memory (LSTM) network, then compress the residual error using arithmetic coding. Experimental results reveal that our method obtains a higher compression ratio, achieving a 15% saving compared to state-of-the-art lossless compression standards, including JPEG-LS, JPEG2000, JP3D, HEVC, and PPMd. Our evaluation demonstrates that the proposed method generalizes well to unseen CT and MRI modalities in a lossless compression scheme. To the best of our knowledge, this is the first lossless compression method that uses an LSTM neural network for 16-bit volumetric medical image compression.
Data compression plays a central role in handling the bottlenecks of data storage, transmission and processing. Lossless compression reduces file size while maintaining bit-perfect decompression, which is the main requirement in medical applications. This paper presents a novel lossless compression method for 16-bit medical imaging volumes. The aim is to train a neural network (NN) as a 3D data predictor that minimizes the differences from the original data values, and to compress those residuals using arithmetic coding. We compare the compression performance of our proposed models against state-of-the-art lossless compression methods, showing that our approach achieves a higher compression ratio than JPEG-LS, JPEG2000, JP3D, and HEVC and generalizes well.
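The predict-then-compress pipeline described in these abstracts can be sketched as follows. This is a minimal illustration, not the papers' method: a trivial previous-voxel predictor stands in for the LSTM, a synthetic volume stands in for real scans, and the empirical entropy of the residuals stands in for the bitrate an arithmetic coder could approach.

```python
import numpy as np

def predict_previous_voxel(volume):
    """Toy causal predictor: predict each voxel from its predecessor
    along the last axis (an illustrative stand-in for a learned
    sequence model such as an LSTM)."""
    pred = np.zeros_like(volume)
    pred[..., 1:] = volume[..., :-1]
    return pred

def residuals(volume):
    """Residuals between the original data and the prediction.
    Lossless reconstruction holds: volume == prediction + residual."""
    return volume.astype(np.int32) - predict_previous_voxel(volume).astype(np.int32)

def entropy_bits(data):
    """Empirical zeroth-order entropy in bits/symbol, a lower bound
    on what an entropy coder (e.g. arithmetic coding) could achieve
    when coding these symbols independently."""
    _, counts = np.unique(data, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Smooth synthetic 16-bit volume: the residuals concentrate on a few
# values and are far more compressible than the raw intensities.
z, y, x = np.meshgrid(np.arange(32), np.arange(32), np.arange(32), indexing="ij")
vol = (1000 + 3 * x + 2 * y + z).astype(np.uint16)

res = residuals(vol)
# Bit-perfect reconstruction from prediction plus residual:
assert np.array_equal(vol.astype(np.int32),
                      predict_previous_voxel(vol).astype(np.int32) + res)
print(entropy_bits(vol), ">", entropy_bits(res))
```

The better the predictor, the more the residual distribution peaks around zero, and the fewer bits the entropy coder spends per voxel; the learned 3D predictor in the papers plays exactly this role.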
Evaluating image quality in Monte Carlo rendered images is an important aspect of the rendering process, as we often need to determine the relative quality of images computed using different algorithms and with varying amounts of computation. The use of a gold-standard reference image, or ground truth, is a common way to provide a baseline against which to compare experimental results. We show that if not chosen carefully, the quality of reference images used for image quality assessment can skew results, leading to significant misreporting of error. We present an analysis of error in Monte Carlo rendered images and discuss practices to avoid, or be aware of, when designing an experiment.
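The effect described above can be illustrated with a small synthetic experiment. This is a hedged sketch, not the paper's analysis: both the "render" and the under-converged reference are modelled as a known ground truth plus independent zero-mean Gaussian noise, so the reference's own residual noise inflates the measured error.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((a - b) ** 2))

# Synthetic ground truth and a Monte Carlo estimate of it, modelled
# as truth plus independent zero-mean noise of std sigma.
sigma = 0.05
truth = rng.uniform(0.0, 1.0, size=(64, 64))
render = truth + rng.normal(0.0, sigma, size=truth.shape)

# A fully converged reference reports the render's true error...
clean_ref = truth
# ...while an under-converged (noisy) reference adds its own noise
# variance to the measurement, roughly doubling the reported MSE here.
noisy_ref = truth + rng.normal(0.0, sigma, size=truth.shape)

err_clean = mse(render, clean_ref)   # approximately sigma**2
err_noisy = mse(render, noisy_ref)   # approximately 2 * sigma**2
print(err_clean, err_noisy)
```

Because the two noise sources are independent, their variances add, so error measured against a noisy reference overstates the true error of the render; this is one concrete way a poorly chosen reference skews a comparison.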
This work presents the Opt-ID software developed by the Rosalind Franklin Institute (RFI) and Diamond Light Source (DLS) in collaboration with Helmholtz-Zentrum Berlin (HZB). Opt-ID allows for efficient simulation of synchrotron Insertion Devices (IDs) and the B fields produced by a given arrangement of candidate magnets. It provides an optimization framework built on the Artificial Immune System (AIS) algorithm for swapping and adjusting magnets within an ID to observe how these changes would affect the magnetic field of a real-world device, guiding ID builders in the steps they should take during ID tuning. Code for Opt-ID is provided open-source under the Apache-2.0 License on GitHub: https://github.com/rosalindfranklininstitute/Opt-ID
In this paper, we introduce a framework for implementing deep copy on top of MPI. The process is initiated by passing just the root object of the dynamic data structure; our framework takes care of all pointer traversal, communication, copying and reconstruction on the receiving nodes. The benefit of our approach is that MPI users can deep copy complex dynamic data structures without writing bespoke communication or serialize/deserialize methods for each object. Such methods can present a challenging implementation problem and quickly become unwieldy to maintain when working with complex structured data. This paper demonstrates our generic implementation, which encapsulates both approaches. We analyze the approach on a variety of structures (trees, graphs (including complete graphs) and rings) and demonstrate that it performs comparably to hand-written implementations, while using a vastly simplified programming interface. We make the complete source code available as a convenient header file.
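The core flatten/rebuild step that such a deep-copy layer performs around the actual MPI send/receive can be sketched as follows. This is an illustrative sketch, not the paper's C/C++ implementation: the `Node`, `flatten`, and `rebuild` names are invented for this example, the MPI transfer itself is omitted, and pointers are replaced by indices so the structure (including cycles, as in a ring) survives transport.

```python
class Node:
    """A node in an arbitrary linked structure (tree, graph, or ring)."""
    def __init__(self, value):
        self.value = value
        self.links = []  # references to other Nodes; cycles are allowed

def flatten(root):
    """Traverse all nodes reachable from the root, assign each an index,
    and replace pointers with indices -- a flat, transferable form."""
    index = {}
    order = []
    stack = [root]
    while stack:
        node = stack.pop()
        if id(node) in index:
            continue  # already visited: handles shared nodes and cycles
        index[id(node)] = len(order)
        order.append(node)
        stack.extend(node.links)
    return [(n.value, [index[id(m)] for m in n.links]) for n in order]

def rebuild(flat):
    """Reconstruct the structure (on the receiving side) from the flat
    (value, link-indices) table; index 0 is the root."""
    nodes = [Node(value) for value, _ in flat]
    for node, (_, links) in zip(nodes, flat):
        node.links = [nodes[i] for i in links]
    return nodes[0]

# A three-node ring: a -> b -> c -> a. A naive recursive copy would
# loop forever; index-based flattening closes the cycle correctly.
a, b, c = Node("a"), Node("b"), Node("c")
a.links, b.links, c.links = [b], [c], [a]

copy = rebuild(flatten(a))
assert copy is not a                              # a genuine deep copy
assert copy.value == "a"
assert copy.links[0].links[0].links[0] is copy    # ring closed in the copy
```

In an MPI setting the flat table would be what is actually serialized and sent; the receiver runs the rebuild step, which is how passing only the root object can suffice.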
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.