Purpose: To develop and evaluate the feasibility of deep learning approaches for magnetic resonance (MR) imaging-based attenuation correction (AC) (termed deep MRAC) in brain positron emission tomography (PET)/MR imaging.
Materials and Methods: A PET/MR imaging AC pipeline was built by using a deep learning approach to generate pseudo computed tomographic (CT) scans from MR images. A deep convolutional auto-encoder network was trained to identify air, bone, and soft tissue in volumetric head MR images coregistered to CT data. A set of 30 retrospective three-dimensional T1-weighted head images was used to train the model, which was then evaluated in 10 patients by comparing the generated pseudo CT scans with acquired CT scans. A prospective simultaneous PET/MR imaging study was carried out in five subjects by using the proposed approach. Analysis of covariance and paired-sample t tests were used for statistical analysis to compare PET reconstruction errors of deep MRAC and two existing MR imaging-based AC approaches against CT-based AC.
Results: Deep MRAC provided an accurate pseudo CT scan, with mean Dice coefficients of 0.971 ± 0.005 for air, 0.936 ± 0.011 for soft tissue, and 0.803 ± 0.021 for bone. Furthermore, deep MRAC provided good PET results, with average errors of less than 1% in most brain regions. Significantly lower PET reconstruction errors were realized with deep MRAC (-0.7% ± 1.1) than with Dixon-based soft-tissue and air segmentation (-5.8% ± 3.1) or anatomic CT-based template registration (-4.8% ± 2.2).
Conclusion: The authors developed an automated approach that allows generation of discrete-valued pseudo CT scans (soft tissue, bone, and air) from a single high-spatial-resolution diagnostic-quality three-dimensional MR image and evaluated it in brain PET/MR imaging. This deep learning approach to MR imaging-based AC provided reduced PET reconstruction error relative to a CT-based standard within the brain compared with current MR imaging-based AC approaches.
© RSNA, 2017. Online supplemental material is available for this article.
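The Dice coefficients reported above measure voxel-wise overlap between a tissue class in the pseudo CT and the same class in the acquired CT. A minimal sketch of that metric on toy label maps (illustrative only, not the authors' evaluation code; the label encoding 0/1/2 is an assumption):

```python
import numpy as np

def dice_coefficient(seg_a, seg_b, label):
    """Dice overlap for one tissue label between two label maps."""
    a = (seg_a == label)
    b = (seg_b == label)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 1D "label maps": 0 = air, 1 = soft tissue, 2 = bone (hypothetical encoding)
pseudo_ct = np.array([0, 0, 1, 1, 2, 2, 2, 1])
true_ct   = np.array([0, 0, 1, 1, 1, 2, 2, 1])
bone_dice = dice_coefficient(pseudo_ct, true_ct, 2)  # overlap of the bone class
```

A Dice of 1.0 means perfect overlap, so the reported 0.803 for bone reflects the difficulty of separating bone from air on T1-weighted MR images, where both are dark.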
Purpose: To describe and evaluate a new fully automated musculoskeletal tissue segmentation method that uses a deep convolutional neural network (CNN) and three-dimensional (3D) simplex deformable modeling to improve the accuracy and efficiency of cartilage and bone segmentation within the knee joint.
Methods: A fully automated segmentation pipeline was built by combining a semantic segmentation CNN and 3D simplex deformable modeling. A CNN architecture called SegNet was applied as the core of the segmentation method to perform high-resolution pixel-wise multiclass tissue classification. The 3D simplex deformable modeling refined the output from SegNet to preserve the overall shape and maintain a desirably smooth surface for each musculoskeletal structure. The fully automated method was tested on a publicly available knee image data set for comparison with currently used state-of-the-art segmentation methods, and was also evaluated on two additional data sets comprising morphologic and quantitative MR images with different tissue contrasts.
Results: The proposed fully automated method provided good segmentation performance, with accuracy superior to that of most state-of-the-art methods on the publicly available knee image data set. The method also demonstrated versatile segmentation performance on both morphologic and quantitative musculoskeletal MR images with different tissue contrasts and spatial resolutions.
Conclusion: The study demonstrates that the combined CNN and 3D deformable modeling approach is useful for performing rapid and accurate cartilage and bone segmentation within the knee joint. CNNs have promising potential applications in musculoskeletal imaging.
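The two-stage pipeline above can be sketched in miniature: a per-pixel argmax over CNN class scores, followed by a regularization pass on each tissue mask. Note the stand-ins: morphological closing is used here only as a crude proxy for the smooth-surface constraint of 3D simplex deformable modeling, which is a different and far more sophisticated technique, and the class layout is hypothetical.

```python
import numpy as np
from scipy import ndimage

def segment_and_refine(logits):
    """logits: (n_classes, H, W) CNN scores -> refined integer label map.
    Class 0 is assumed to be background (an assumption of this sketch)."""
    label_map = np.argmax(logits, axis=0)  # pixel-wise multiclass classification
    refined = np.zeros_like(label_map)
    for c in range(1, logits.shape[0]):
        mask = label_map == c
        # Crude stand-in for shape refinement: fill pinholes / smooth edges.
        mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
        refined[mask] = c
    return refined

rng = np.random.default_rng(0)
demo = segment_and_refine(rng.normal(size=(3, 8, 8)))  # 3 classes on an 8x8 grid
```

The design point the paper makes is that pixel-wise CNN output alone can violate anatomic shape priors; the second stage exists to restore them.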
Purpose: To investigate direct imaging of trabecular bone using a 3D adiabatic inversion recovery prepared ultrashort TE cones (3D IR-UTE-Cones) sequence.
Methods: The proposed 3D IR-UTE-Cones sequence used a broadband adiabatic inversion pulse together with a short TR/TI combination to suppress signals from long-T2 tissues such as muscle and marrow fat, followed by a multispoke UTE acquisition to detect signal from short-T2 water components in trabecular bone. The feasibility of this technique for robust suppression of long-T2 tissues was first demonstrated through numerical simulations. The proposed IR-UTE-Cones sequence was then applied to a hip agarose bone phantom and to six healthy volunteers for morphologic imaging and quantitative T2* and proton density mapping of trabecular bone.
Results: Numerical simulation suggests that the IR technique with a short TR/TI combination provides sufficient suppression of long-T2 tissues over a wide range of T1 values. High-contrast imaging of trabecular bone was achieved ex vivo and in vivo, with fitted T2* values of 0.3–0.45 ms and proton densities of 5–9 mol/L.
Conclusion: The 3D IR-UTE-Cones sequence with a short TR/TI combination provides robust suppression of long-T2 tissues and allows both selective imaging and quantitative (T2* and proton density) assessment of short-T2 water components in trabecular bone in vivo.
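The short-TR/TI suppression strategy rests on a textbook inversion recovery relation: with an ideal inversion, the longitudinal magnetization at the readout is Mz(TI) = 1 - 2·exp(-TI/T1) + exp(-TR/T1), which nulls at TI = T1·ln(2 / (1 + exp(-TR/T1))). A small sketch of why one short TR/TI pair suppresses many long-T2 tissues at once (TR and T1 values below are illustrative, not the paper's protocol):

```python
import math

def null_ti(t1, tr):
    """TI (ms) that nulls Mz for a tissue with given T1 (ms), assuming an
    ideal 180-degree inversion each TR: 1 - 2*exp(-TI/T1) + exp(-TR/T1) = 0."""
    return t1 * math.log(2.0 / (1.0 + math.exp(-tr / t1)))

def mz_at(ti, t1, tr):
    """Normalized longitudinal magnetization at the readout."""
    return 1.0 - 2.0 * math.exp(-ti / t1) + math.exp(-tr / t1)

# With a short TR, tissues across a wide range of long T1s null at nearly
# the same TI, so a single TI suppresses muscle and marrow fat together.
tr = 150.0  # ms, illustrative
for t1 in (400.0, 800.0, 1200.0):  # ms, representative long-T2 tissues
    ti = null_ti(t1, tr)
    print(f"T1={t1:.0f} ms -> null TI={ti:.1f} ms, Mz={mz_at(ti, t1, tr):+.4f}")
```

The short-T2 bone water, whose signal decays during the long inversion pulse rather than being inverted, survives this preparation and is picked up by the UTE readout.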
The proposed MRAC method utilizing deep learning with transfer learning and an efficient dRHE acquisition enables reliable PET quantitation with accurate and rapid pseudo CT generation.
Purpose: To investigate tricomponent analysis of human cortical bone using a multipeak fat signal model with 3D ultrashort TE Cones sequences on a clinical 3T scanner.
Methods: Tricomponent fitting of bound water, pore water, and fat content using a multipeak fat spectral model was proposed for 3D ultrashort TE imaging of cortical bone. Three-dimensional ultrashort TE Cones acquisitions combined with tricomponent analysis were used to investigate bound and pore water T2* values and fractions, as well as fat T2* and fraction, in cortical bone. Feasibility studies were performed on nine human cortical bone specimens, with regions of interest selected from the endosteum to the periosteum in four circumferential regions. Microcomputed tomography studies were performed to measure bone porosity and bone mineral density for comparison and validation of the bound and pore water analyses.
Results: The oscillation of the signal decay was well fitted with the proposed tricomponent model. The sum of the pore water and fat fractions from tricomponent analysis showed a high correlation with microcomputed tomography porosity (R = 0.74, P < 0.01). The estimated bound-water fraction also demonstrated a high correlation with bone mineral density (R = 0.70, P < 0.01).
Conclusion: Tricomponent analysis significantly improves the estimation of bound-water and pore-water fractions in human cortical bone.
Keywords: bound water, multipeak fat spectral model, pore water, T2*, UTE
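The "oscillation of the signal decay" arises because off-resonance fat peaks beat against the on-resonance water signal across echo times. A heavily simplified sketch of a tricomponent magnitude fit (this is NOT the published model: the two fat peak offsets and weights below are hypothetical stand-ins, and the parameter values are merely plausible):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical two-peak fat spectrum (offsets in kHz at 3 T, weights sum to 1)
FAT_KHZ = np.array([-0.43, -0.33])
FAT_W = np.array([0.7, 0.3])

def tricomponent(te_ms, a_bw, t2_bw, a_pw, t2_pw, a_f, t2_f):
    """Magnitude UTE signal: bound water + pore water + off-resonance fat."""
    water = a_bw * np.exp(-te_ms / t2_bw) + a_pw * np.exp(-te_ms / t2_pw)
    fat_mod = FAT_W @ np.exp(2j * np.pi * np.outer(FAT_KHZ, te_ms))
    return np.abs(water + a_f * np.exp(-te_ms / t2_f) * fat_mod)

# Simulate a noiseless decay with illustrative values (bound water T2* ~0.35 ms)
te = np.arange(0.032, 20.0, 0.2)          # echo times in ms
truth = (0.5, 0.35, 0.3, 5.0, 0.2, 1.5)   # amplitudes and T2* values (ms)
sig = tricomponent(te, *truth)

popt, _ = curve_fit(tricomponent, te, sig,
                    p0=(0.4, 0.3, 0.4, 4.0, 0.3, 1.0),
                    bounds=(1e-6, [2, 2, 2, 20, 2, 10]))
```

Fractions then follow from the fitted amplitudes (e.g., bound-water fraction = a_bw / (a_bw + a_pw + a_f)); the fat oscillation is what lets the fit separate fat from pore water despite their overlapping decay rates.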
Background: To develop and evaluate the feasibility of a data-driven deep learning approach (deepAC) for positron emission tomography (PET) image attenuation correction without anatomic imaging. A PET attenuation correction pipeline was developed that utilizes deep learning to generate continuously valued pseudo-computed tomography (CT) images from uncorrected 18F-fluorodeoxyglucose (18F-FDG) PET images. A deep convolutional encoder-decoder network was trained to identify tissue contrast in volumetric uncorrected PET images co-registered to CT data. A set of 100 retrospective 3D FDG PET head images was used to train the model. The model was evaluated in another 28 patients by comparing the generated pseudo-CT to the acquired CT using the Dice coefficient and mean absolute error (MAE), and finally by comparing PET images reconstructed using the pseudo-CT and the acquired CT for attenuation correction. Paired-sample t tests were used for statistical analysis to compare PET reconstruction error using deepAC with that using CT-based attenuation correction.
Results: deepAC produced pseudo-CTs with Dice coefficients of 0.80 ± 0.02 for air, 0.94 ± 0.01 for soft tissue, and 0.75 ± 0.03 for bone, and an MAE of 111 ± 16 HU relative to the PET/CT data set. deepAC provides quantitatively accurate 18F-FDG PET results, with average errors of less than 1% in most brain regions.
Conclusions: We have developed an automated approach (deepAC) that allows generation of a continuously valued pseudo-CT from a single 18F-FDG non-attenuation-corrected (NAC) PET image and evaluated it in PET/CT brain imaging.
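Because deepAC produces a continuously valued pseudo-CT rather than discrete tissue labels, it is additionally scored with the mean absolute error in Hounsfield units. A minimal sketch of that metric on toy voxel values (illustrative only; the masking convention is an assumption):

```python
import numpy as np

def mae_hu(pseudo_ct, ct, mask):
    """Mean absolute error in Hounsfield units over a region of interest."""
    return float(np.mean(np.abs(pseudo_ct[mask] - ct[mask])))

# Toy voxel values in HU: air ~ -1000, soft tissue ~ 40, bone ~ 1000
pseudo = np.array([-1000.0, 30.0, 950.0, 45.0])
ct     = np.array([-995.0, 40.0, 1000.0, 40.0])
mask   = np.ones_like(ct, dtype=bool)  # here: evaluate every voxel
toy_mae = mae_hu(pseudo, ct, mask)
```

An MAE of 111 HU, as reported, is dominated by bone and air-cavity voxels, since soft tissue spans only a few tens of HU.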
Background: Signal contamination from long-T2 water is a major challenge in direct imaging of myelin with MRI. Nulling of the unwanted long-T2 signals can be achieved with an inversion recovery (IR) preparation pulse that nulls long-T2 white matter within the brain. The remaining ultrashort-T2 signal from myelin can then be detected with an ultrashort echo time (UTE) sequence.
Purpose: To develop patient-specific whole-brain myelin imaging with a three-dimensional double-echo sliding inversion recovery (DESIRE) UTE sequence.
Materials and Methods: The DESIRE UTE sequence generates a series of IR images with different inversion times during a single scan. The optimal inversion time for nulling the long-T2 signal is determined by finding the minimal signal on the second echo. Myelin images are generated by subtracting the second echo image from the first UTE image. To validate this method, a prospective study was performed in phantoms, cadaveric brain specimens, healthy volunteers, and patients with multiple sclerosis (MS). A total of 20 healthy volunteers (mean age, 40 years ± 13 [standard deviation]; 10 women) and 20 patients with MS (mean age, 58 years ± 8; 15 women) who underwent MRI between November 2017 and February 2019 were prospectively included. Analysis of variance was performed to evaluate the signal difference between MS lesions and normal-appearing white matter in patients with MS.
Results: High signal intensity and the corresponding T2* and T1 values of the extracted myelin vesicles provided evidence for direct imaging of ultrashort-T2 myelin protons using the UTE sequence. Gadobenate dimeglumine phantoms with a wide range of T1 values were selectively suppressed with DESIRE UTE. In the ex vivo brain study, signal loss was observed in MS lesions and was confirmed with histologic analysis. In the human study, there was a significant reduction in normalized signal intensity in MS lesions compared with that in normal-appearing white matter (0.19 ± 0.10 vs 0.76 ± 0.11, respectively; P < .001).
Conclusion: The double-echo sliding inversion recovery ultrashort echo time sequence can generate myelin-specific whole-brain images on a clinical 3-T scanner.
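The DESIRE selection-and-subtraction logic described above (pick the sliding TI whose second echo is minimal, then subtract the second echo from the first) can be sketched on toy arrays. This is a sketch of the reconstruction step only, not the pulse sequence, and the per-slice averaging criterion is an assumption:

```python
import numpy as np

def desire_myelin(echo1_stack, echo2_stack):
    """echo*_stack: (n_TI, H, W) magnitude images across sliding inversion times.
    Returns the index of the best-nulled TI and the myelin (subtraction) image."""
    # Long-T2 signals persist to the second echo, so the TI whose second echo
    # has the lowest mean signal best nulls long-T2 white matter.
    mean_e2 = echo2_stack.reshape(echo2_stack.shape[0], -1).mean(axis=1)
    best = int(np.argmin(mean_e2))
    # Ultrashort-T2 myelin signal is present on the first echo but has decayed
    # by the second, so the subtraction isolates it.
    return best, echo1_stack[best] - echo2_stack[best]

# Toy data: 3 TIs; TI index 1 nulls the long-T2 signal best
e2 = np.stack([np.full((2, 2), v) for v in (0.5, 0.05, 0.4)])
e1 = e2 + 0.2   # a constant 0.2 of "myelin" signal survives to the first echo
best_ti, myelin = desire_myelin(e1, e2)
```

Determining the nulling TI from the data itself is what makes the method patient-specific: no assumed white-matter T1 is needed.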