This paper gives a gentle introduction to deep learning in medical image processing, proceeding from theoretical foundations to applications. We first discuss general reasons for the popularity of deep learning, including several major breakthroughs in computer science. Next, we review the basics of the perceptron and neural networks, along with fundamental theory that is often omitted; doing so allows us to understand the reasons for the rise of deep learning in many application domains. Medical image processing is one of the areas most strongly affected by this rapid progress, in particular in image detection and recognition, image segmentation, image registration, and computer-aided diagnosis. Recent work in physical simulation, modelling, and reconstruction has also produced astonishing results. Yet some of these approaches neglect prior knowledge and hence risk producing implausible results. These apparent weaknesses highlight current limitations of deep learning; however, we also briefly discuss promising approaches that might resolve these problems in the future.
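The perceptron mentioned above is the historical building block of the neural networks discussed in this abstract. As a minimal illustration (not taken from the paper itself), the classic Rosenblatt update rule can be sketched as follows, assuming labels in {-1, +1} and a linearly separable toy problem:

```python
import numpy as np

def train_perceptron(X, y, lr=1.0, epochs=20):
    """Classic Rosenblatt perceptron for labels y in {-1, +1}.

    The weight vector is updated only when a sample is misclassified
    by the current linear decision rule sign(x @ w + b).
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Misclassification (or on the boundary): nudge w toward yi * xi
            if yi * (xi @ w + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Linearly separable toy data: logical AND with labels in {-1, +1}
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
preds = np.sign(X @ w + b)
```

Because the data are linearly separable, the perceptron convergence theorem guarantees this loop reaches a separating hyperplane in a finite number of updates.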
Fluorescence tomography of diffuse media yields optimal three-dimensional imaging when multiple projections over 360° are captured, compared with limited-projection-angle systems such as slab-geometry implementations. We demonstrate noncontact, 360° projection fluorescence tomography of mice using CCD-camera-based detection in free space, i.e., without matching fluids. This approach achieves high spatial sampling of photons propagating through tissue and yields a data set with higher information content than fiber-based 360° implementations. Reconstruction feasibility using 36 projections in 10° steps is demonstrated in mice.
In this preliminary study, we demonstrated that 3-D localization of SLNs is feasible using freehand SPECT technology. Prerequisites for acquiring scans of a quality that most likely allows precise SLN mapping have been defined. This approach has high potential to enable image-guided biopsy and further standardization of SLN dissection, thus bringing 3-D nuclear imaging into the operating room.
Findings confirm the safety of gadopentetate dimeglumine.
Here we introduce a new concept for x-ray computed tomography that yields information about the local micro-morphology and its orientation in each voxel of the reconstructed 3D tomogram. In contrast to conventional x-ray CT, which reconstructs only a single scalar value for each point in the 3D image, our approach provides a full scattering tensor with multiple independent structural parameters in each volume element. In the application example shown in this study, we highlight that our method can visualize sub-pixel fiber orientations in a carbon composite sample, demonstrating its value for non-destructive testing applications. Moreover, as the method is based on a conventional x-ray tube, we believe it will also have a great impact on the wider range of material science investigations and on future medical diagnostics.
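Once a symmetric scattering tensor has been reconstructed per voxel, a dominant orientation can be read off from its eigendecomposition. The sketch below is illustrative only (it is not the authors' reconstruction pipeline) and assumes, as a modeling convention, that the eigenvector of the smallest scattering eigenvalue indicates the local fiber axis:

```python
import numpy as np

def fiber_orientations(tensors):
    """Extract a per-voxel orientation from symmetric 3x3 scattering tensors.

    tensors: array of shape (N, 3, 3), one symmetric tensor per voxel.
    Returns the unit eigenvector of the smallest eigenvalue per voxel,
    here assumed (by convention) to point along the local fiber axis.
    """
    vals, vecs = np.linalg.eigh(tensors)  # eigenvalues in ascending order
    return vecs[..., 0]  # column 0 = eigenvector of the smallest eigenvalue

# Toy voxel: scattering suppressed along z, mimicking a z-aligned fiber
T = np.diag([3.0, 2.0, 0.5])[None, :, :]
axis = fiber_orientations(T)[0]
```

The eigenvector is defined only up to sign, so downstream visualization typically maps orientations to a half-sphere or to an orientation color wheel.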
The sampling patterns of the light field microscope (LFM) are highly depth-dependent, which implies non-uniform recoverable lateral resolution across depth. Moreover, reconstructions using state-of-the-art approaches suffer from strong artifacts at axial ranges where the LFM samples the light field at a coarse rate. In this work, we analyze the sampling patterns of the LFM and introduce a flexible light field point spread function model (LFPSF) to cope with arbitrary LFM designs. We then propose a novel aliasing-aware deconvolution scheme to address the sampling artifacts. We demonstrate the high potential of the proposed method on real experimental data.
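The deconvolution schemes that LFM reconstruction builds on are typically iterative Richardson–Lucy-type updates against a PSF model. The 1-D sketch below shows the plain Richardson–Lucy baseline only, not the aliasing-aware variant proposed in the abstract:

```python
import numpy as np

def richardson_lucy(observed, psf, iters=50, eps=1e-12):
    """Plain 1-D Richardson-Lucy deconvolution.

    Iteratively refines an estimate so that its convolution with the PSF
    matches the observation; the multiplicative update keeps the estimate
    nonnegative. This is the generic baseline, not the aliasing-aware
    scheme described in the abstract.
    """
    est = np.full_like(observed, observed.mean())
    psf_flip = psf[::-1]  # correlation = convolution with the flipped PSF
    for _ in range(iters):
        conv = np.convolve(est, psf, mode='same')
        ratio = observed / (conv + eps)
        est *= np.convolve(ratio, psf_flip, mode='same')
    return est

# Toy example: blur two spikes, then deconvolve
psf = np.array([0.25, 0.5, 0.25])
truth = np.zeros(21)
truth[7] = 1.0
truth[14] = 2.0
blurred = np.convolve(truth, psf, mode='same')
restored = richardson_lucy(blurred, psf, iters=200)
```

After enough iterations the estimate re-concentrates the blurred energy at the original spike locations; a depth-dependent (spatially varying) PSF, as in the LFM, requires replacing the convolutions with a full forward model.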
X-ray computed tomography (CT) is one of the most commonly used three-dimensional medical imaging modalities today. It has been refined over several decades, with the most recent innovations including dual-energy and spectral photon-counting technologies. Nevertheless, it has been discovered that wave-optical contrast mechanisms—beyond the presently used X-ray attenuation—offer the potential of complementary information, particularly on otherwise unresolved tissue microstructure. One such approach is dark-field imaging, which has recently been introduced and has already demonstrated significantly improved radiological benefit in small-animal models, especially for lung diseases. Until now, however, dark-field CT could not be translated to the human scale and has been restricted to benchtop and small-animal systems, with scan durations of several minutes or more. This is mainly because the adaptation and upscaling to the mechanical complexity, speed, and size of a human CT scanner had so far remained an unsolved challenge. Here we report the successful integration of a Talbot–Lau interferometer into a clinical CT gantry and present dark-field CT results of a human-sized anthropomorphic body phantom, reconstructed from a single rotation scan performed in 1 s. Moreover, we present our key hardware and software solutions to the previously unsolved roadblocks that had kept dark-field CT from being translated from the optical bench into a rapidly rotating CT gantry, with all its associated challenges such as vibrations, continuous rotation, and a large field of view. This development enables clinical dark-field CT studies with human patients in the near future.
Quite recently, a method has been presented to reconstruct X-ray scattering tensors from projections obtained in a grating interferometry setup. The original publications present a rather specialised approach, for instance by suggesting a single SART-based solver. In this work, we propose a novel approach to solving the inverse problem, allowing the use of algorithms other than SART (such as conjugate gradient), faster tensor recovery, and an intuitive visualisation. Furthermore, we introduce constraint enforcement for X-ray tensor tomography (cXTT) and demonstrate that, similar to regularisation, this yields visually smoother results than the state-of-the-art approach.
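The conjugate gradient alternative to SART mentioned above amounts to solving the tomographic system Ax = b in the least-squares sense via the normal equations. The sketch below is a generic CGLS-style solver on a toy system, not the cXTT implementation, and its constraint enforcement is omitted (it could be added, e.g., as a projection step between iterations):

```python
import numpy as np

def cgls(A, b, iters=50, tol=1e-14):
    """Conjugate gradient on the normal equations A^T A x = A^T b.

    A generic least-squares solver of the kind the abstract proposes as
    an alternative to SART; cXTT-specific constraints are not shown.
    """
    x = np.zeros(A.shape[1])
    r = A.T @ (b - A @ x)   # gradient residual of the least-squares problem
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A.T @ (A @ p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy overdetermined, consistent system (3 "measurements", 2 unknowns)
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, -1.0])
b = A @ x_true
x = cgls(A, b)
```

For an n-dimensional normal-equation system, exact arithmetic CG converges in at most n iterations, which is one reason it can be faster than SART-style row-action updates on well-conditioned problems.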