Purpose
Complementary information obtained from multiple tissue contrasts helps physicians assess and diagnose a variety of diseases and plan their treatment. However, acquiring multi-contrast magnetic resonance (MR) images for every patient using multiple pulse sequences is time-consuming and expensive; medical image synthesis has been demonstrated as an effective alternative. The purpose of this study is to develop a unified framework for multimodal MR image synthesis.

Methods
A unified generative adversarial network consisting of only a single generator and a single discriminator was developed to learn the mappings among images of four different modalities. The generator took an image and its modality label as inputs and learned to synthesize the image in the target modality, while the discriminator was trained to distinguish between real and synthesized images and to classify them into their corresponding modalities. The network was trained and tested on multimodal brain MRI comprising four contrasts: T1-weighted (T1), T1-weighted contrast-enhanced (T1c), T2-weighted (T2), and fluid-attenuated inversion recovery (FLAIR). The proposed method was assessed quantitatively by computing the normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), visual information fidelity (VIF), and naturalness image quality evaluator (NIQE).

Results
The proposed model was trained and tested on a cohort of 274 glioma patients with well-aligned multimodal MRI scans. After the model was trained, tests were conducted using each of T1, T1c, T2, and FLAIR as a single input modality to generate the three remaining modalities. Our proposed method shows high accuracy and robustness for image synthesis with any MRI modality available in the database as input.
For example, with T1 as the input modality, the NMAEs for the generated T1c, T2, and FLAIR were 0.034 ± 0.005, 0.041 ± 0.006, and 0.041 ± 0.006; the PSNRs were 32.353 ± 2.525 dB, 30.016 ± 2.577 dB, and 29.091 ± 2.795 dB; the SSIMs were 0.974 ± 0.059, 0.969 ± 0.059, and 0.959 ± 0.059; the VIFs were 0.750 ± 0.087, 0.706 ± 0.097, and 0.654 ± 0.062; and the NIQEs were 1.396 ± 0.401, 1.511 ± 0.460, and 1.259 ± 0.358, respectively.

Conclusions
We proposed a novel multimodal MR image synthesis method based on a unified generative adversarial network. The network takes an image and its modality label as inputs and synthesizes multimodal images in a single forward pass. The results demonstrate that the proposed method can accurately synthesize multimodal MR images from a single MR image.
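As a concrete illustration of two of the metrics reported above, here is a minimal pure-Python sketch of NMAE and PSNR on flattened image arrays. This is a sketch under stated assumptions, not the paper's implementation: NMAE is taken here as mean absolute error normalized by the reference intensity range, and the PSNR peak is taken as the reference maximum; the paper's exact normalization may differ, and the toy arrays are illustrative.

```python
import math

def nmae(ref, pred):
    """Mean absolute error normalized by the reference intensity range."""
    rng = max(ref) - min(ref)
    return sum(abs(r - p) for r, p in zip(ref, pred)) / (len(ref) * rng)

def psnr(ref, pred):
    """Peak signal-to-noise ratio in dB, using the reference maximum as peak."""
    mse = sum((r - p) ** 2 for r, p in zip(ref, pred)) / len(ref)
    return 10.0 * math.log10(max(ref) ** 2 / mse)

ref  = [0.0, 0.5, 1.0, 0.25]    # flattened reference image (toy values)
pred = [0.1, 0.45, 0.95, 0.30]  # flattened synthesized image (toy values)
```

Lower NMAE and higher PSNR both indicate a synthesized image closer to the ground-truth scan, which is how the per-modality numbers above should be read.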
Understanding the relationship between brain function and behavior remains a major challenge in neuroscience. Photoacoustic tomography (PAT) is an emerging technique that allows noninvasive in vivo brain imaging at micrometer-millisecond spatiotemporal resolution. In this article, a novel, miniaturized 3D wearable PAT (3D-wPAT) technique is described for brain imaging in behaving rats. 3D-wPAT has three layers of fully functional acoustic transducer arrays. Phantom imaging experiments revealed that the in-plane X-Y spatial resolution was ~200 μm for each acoustic detection layer. The functional imaging capacity of 3D-wPAT was demonstrated by mapping cerebral oxygen saturation via multi-wavelength irradiation in behaving rats under hyperoxia. In addition, we demonstrated that 3D-wPAT could be used to monitor sensory stimulus-evoked responses in behaving rats by measuring hemodynamic responses in the primary visual cortex during visual stimulation. Together, these results show the potential of 3D-wPAT for brain studies in behaving rodents.

Imaging in behaving animals is becoming an important tool in behavioral neuroscience and in preclinical studies of brain disease therapy. It offers scientists the opportunity to correlate brain function with voluntary sensorimotor responses, which is not feasible in traditional anesthetized or head-fixed preparations [1,2]. In vivo microelectrode arrays can record neuronal action potentials at high speed in behaving rats [3,4], but they are invasive and often restricted to a limited number of recording electrodes [5]. Conversely, functional magnetic resonance imaging (fMRI) [6], positron emission tomography (PET) [7], and diffuse optical tomography (DOT) [8] offer 3D noninvasive recording of whole-brain metabolic/hemodynamic responses in awake, behaving animals, but their spatial and/or temporal resolution is relatively low (millimeters for fMRI, PET, and DOT; seconds for fMRI and PET).
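The multi-wavelength oxygen-saturation mapping mentioned above is commonly computed by linear spectral unmixing of hemoglobin absorption. A minimal two-wavelength sketch follows; the abstract does not describe the actual calibration or wavelengths used, so the extinction coefficients below are purely illustrative placeholders, not physiological values.

```python
def so2_two_wavelength(mua1, mua2, eps):
    """Estimate oxygen saturation sO2 from absorption coefficients measured
    at two wavelengths by solving the 2x2 linear system
        mua_i = eps[i][0] * C_HbO2 + eps[i][1] * C_Hb
    for the oxy-/deoxy-hemoglobin concentrations, then forming
        sO2 = C_HbO2 / (C_HbO2 + C_Hb).
    """
    (e11, e12), (e21, e22) = eps            # rows: wavelengths; cols: (HbO2, Hb)
    det = e11 * e22 - e12 * e21             # Cramer's rule for the 2x2 system
    c_hbo2 = (mua1 * e22 - e12 * mua2) / det
    c_hb = (e11 * mua2 - mua1 * e21) / det
    return c_hbo2 / (c_hbo2 + c_hb)

# Illustrative (not physiological) extinction coefficients:
eps = ((2.0, 1.0),   # wavelength 1
       (1.0, 3.0))   # wavelength 2
```

In practice more than two wavelengths are often used with a least-squares fit, which makes the estimate more robust to noise in the per-wavelength photoacoustic amplitudes.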
Optical microscopic imaging techniques, such as two-photon microscopy (TPM) [9], provide high spatial (micrometers) and temporal (milliseconds) resolution, and have been used to examine brain function and behavior in behaving animals in proof-of-principle studies [10,11]. Although less invasive than microelectrode methods, TPM still risks damaging the brain vasculature or neurons, which limits its efficacy in chronic studies. In addition, the imaging region in TPM is generally limited to a few square millimeters (1–4 mm²). Moreover, the high optical scattering of biological tissue restricts the imaging depth to <1 mm. Therefore, an imaging technique with high spatiotemporal resolution that can noninvasively image neural function across the brain, including at substantial depth along the dorsoventral axis, in a behaving animal is in high demand.

Photoacoustic imaging is an emerging technique that has been widely used in biomedical applications [12,13]. In principle, absorption contrasts within the tissue are acoustically detected via the photoacoustic effect, in which initial acoustic...
A single miniature endoscope capable of concurrently probing multiple tissue contrast mechanisms at high resolution is highly attractive, as it can provide complementary, more complete information on hard-to-access internal organs. Here we describe such a miniature endoscope, only 1 mm in diameter, that integrates photoacoustic imaging (PAI), optical coherence tomography (OCT), and ultrasound (US). The PAI/OCT/US integration allows high-resolution imaging of three tissue contrasts: optical absorption (PAI), optical scattering (OCT), and acoustic properties (US). We demonstrate the capabilities of this trimodal endoscope on mouse ear, human hand, and human arteries with atherosclerotic plaques. This 1-mm-diameter trimodal endoscope has the potential to be used for imaging internal organs such as arteries, the GI tract, the esophagus, and the prostate in both humans and animals.
In this Letter, we present a photoacoustic imaging (PAI) system, based on a low-cost, high-power miniature light-emitting diode (LED), that is capable of mapping vascular networks in biological tissue in vivo. Overdriven with 200 ns pulses at a repetition rate of 40 kHz, a 1.2 W, 405 nm LED with a radiation area of 1000 μm × 1000 μm and a package size of 3.5 mm × 3.5 mm was used to excite photoacoustic signals in tissue. Phantoms including black stripes, lead, and hair were used to validate the system, in which a volumetric PAI image was obtained by scanning the transducer and the light beam over the object in a two-dimensional x-y plane. In vivo imaging of the vasculature of a mouse ear shows that LED-based PAI could have great potential for label-free biomedical imaging applications where bulky and expensive pulsed lasers are impractical.
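Volumetric data acquired by such x-y raster scanning are often rendered as a maximum-amplitude projection (MAP), which collapses the time-resolved signal at each scan position to its peak amplitude. The abstract does not specify the rendering actually used, so the following pure-Python sketch is an assumption for illustration only.

```python
def map_image(alines):
    """Maximum-amplitude projection: collapse each A-line (the time-resolved
    photoacoustic signal recorded at one (x, y) scan position) to its peak
    absolute amplitude, yielding a 2D en-face image of the scanned region."""
    return [[max(abs(s) for s in aline) for aline in row] for row in alines]

# Toy scan: 1 row x 2 columns, 3 time samples per A-line.
scan = [[[0.1, -0.9, 0.3],
         [0.0, 0.2, -0.1]]]
image = map_image(scan)  # [[0.9, 0.2]]
```

Because the depth information is collapsed, MAP images emphasize the strongest absorbers (here, blood vessels) regardless of their depth within the detection range.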
Purpose
Auto-segmentation algorithms offer a potential solution to eliminate the labor-intensive, time-consuming, and observer-dependent manual delineation of organs-at-risk (OARs) in radiotherapy treatment planning. This study aimed to develop a deep learning-based automated OAR delineation method that tackles the challenges remaining in reaching reliable expert-level performance with state-of-the-art auto-delineation algorithms.

Methods
The accuracy of OAR delineation is expected to improve by utilizing the complementary contrasts provided by computed tomography (CT) (bony-structure contrast) and magnetic resonance imaging (MRI) (soft-tissue contrast). Given CT images, synthetic MR images were first generated by a pre-trained cycle-consistent generative adversarial network. The features of CT and synthetic MRI were then extracted and combined for the final delineation of organs using a mask scoring regional convolutional neural network. Both in-house and public datasets containing CT scans from head-and-neck (HN) cancer patients were adopted to quantitatively evaluate the performance of the proposed method against current state-of-the-art algorithms, using metrics including the Dice similarity coefficient (DSC), 95th-percentile Hausdorff distance (HD95), mean surface distance (MSD), and residual mean square distance (RMS).

Results
Across all 18 OARs in our in-house dataset, the proposed method achieved an average DSC, HD95, MSD, and RMS of 0.77 (0.58-0.90), 2.90 mm (1.32-7.63 mm), 0.89 mm (0.42-1.85 mm), and 1.44 mm (0.71-3.15 mm), respectively, outperforming the current state-of-the-art algorithms by 6%, 16%, 25%, and 36%, respectively. On the public datasets, an average DSC of 0.86 (0.73-0.97) was achieved across all nine OARs, 6% better than the competing methods.

Conclusion
We demonstrated the feasibility of a synthetic MRI-aided deep learning framework for automated delineation of OARs in HN radiotherapy.
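The Dice similarity coefficient reported above measures the overlap between a predicted and a reference segmentation. A minimal sketch on voxel-index sets follows; the voxel coordinates used in the example are illustrative, not from the study's data.

```python
def dice(pred, ref):
    """Dice similarity coefficient between two segmentations, each given as a
    set of voxel indices: DSC = 2|A ∩ B| / (|A| + |B|), with 1.0 meaning
    perfect overlap and 0.0 no overlap."""
    if not pred and not ref:
        return 1.0  # both empty: conventionally treated as perfect agreement
    return 2 * len(pred & ref) / (len(pred) + len(ref))

pred = {(0, 0, 0), (0, 1, 0), (1, 0, 0)}  # predicted organ voxels (toy)
ref  = {(0, 0, 0), (0, 1, 0), (1, 1, 0)}  # manual reference voxels (toy)
score = dice(pred, ref)  # 2*2 / (3+3) = 2/3
```

Unlike DSC, the surface-based metrics in the study (HD95, MSD, RMS) measure boundary distances in millimeters, which is why they penalize contour deviations that volume overlap alone can hide.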