In routine whole-body PET/MR hybrid imaging, attenuation correction (AC) is usually performed by segmentation methods based on a Dixon MR sequence that provides up to 4 tissue classes. Because the Dixon-based MR sequence provides no bone information, bone is currently treated as soft tissue. The aim of this study was therefore to evaluate a novel model-based AC method that accounts for bone in whole-body PET/MR imaging. Methods The new method ("Model") is based on a regular 4-compartment segmentation from a Dixon sequence ("Dixon"). Bone information is added using a model-based bone segmentation algorithm that includes a set of prealigned MR image and bone mask pairs for each major body bone individually. Model was quantitatively evaluated on 20 patients who underwent whole-body PET/MR imaging. As a standard of reference, CT-based μ-maps were generated for each patient individually by nonrigid registration to the MR images based on PET/CT data. This step allowed a quantitative comparison of all μ-maps based on a single PET emission raw dataset from the PET/MR system. Volumes of interest were drawn on normal tissue, soft-tissue lesions, and bone lesions, and standardized uptake values (SUVs) were quantitatively compared. Results In soft-tissue regions with background uptake, the average SUV bias in background volumes of interest was 2.4% ± 2.5% for Dixon and 2.7% ± 2.7% for Model, compared with CT-based AC. For bony tissue, the −25.5% ± 7.9% underestimation observed with Dixon was reduced to −4.9% ± 6.7% with Model. In bone lesions, the average underestimation was −7.4% ± 5.3% for Dixon and −2.9% ± 5.8% for Model. For soft-tissue lesions, the biases were 5.1% ± 5.1% for Dixon and 5.2% ± 5.2% for Model.
Conclusion The novel MR-based AC method, combining Dixon-based soft-tissue segmentation with model-based bone estimation, improves PET quantification in whole-body PET/MR imaging, especially in bony tissue and adjacent soft tissue.
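The 4-compartment-plus-bone μ-map described above can be illustrated with a minimal sketch. This is not the vendor implementation: the class names are placeholders, and the linear attenuation coefficients (LACs, cm⁻¹ at 511 keV) are approximate literature figures, not the values used in the study.

```python
# Illustrative sketch: each Dixon tissue class gets a predefined LAC
# (approximate literature values at 511 keV, cm^-1), and the model-based
# bone mask overrides the soft-tissue value wherever bone is predicted.
LAC = {"air": 0.0, "lung": 0.0224, "fat": 0.0864, "soft": 0.0975, "bone": 0.151}

def build_mu_map(tissue_labels, bone_mask):
    """tissue_labels: per-voxel Dixon class names; bone_mask: per-voxel booleans."""
    return [LAC["bone"] if is_bone else LAC[label]
            for label, is_bone in zip(tissue_labels, bone_mask)]

labels = ["air", "lung", "fat", "soft", "soft"]
bone   = [False, False, False, False, True]
mu_map = build_mu_map(labels, bone)
```

The key point of the method is the override step: without the bone mask, the last voxel would be assigned the soft-tissue LAC, which is the source of the ~25% underestimation in bony tissue reported above.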
In general image recognition problems, discriminative information often lies in local image patches. For example, most human identity information exists in the image patches containing faces. The same holds for medical images: the "bodypart identity" of a transversal slice, i.e., which body part the slice comes from, is often indicated by local image information; for example, a cardiac slice and an aortic arch slice are differentiated only by the mediastinum region. In this work, we design a multi-stage deep learning framework for image classification and apply it to bodypart recognition. Specifically, the proposed framework aims at 1) discovering the local regions that are discriminative or non-informative for the image classification problem, and 2) learning an image-level classifier based on these local regions. We achieve these two tasks in the two stages of the learning scheme, respectively. In the pre-train stage, a convolutional neural network (CNN) is learned in a multi-instance learning fashion to extract the most discriminative and the most non-informative local patches from the training slices. In the boosting stage, the pre-learned CNN is further boosted with these local patches for image classification. A CNN learned by exploiting discriminative local appearances becomes more accurate than one learned from the global image context. The key hallmark of our method is that it automatically discovers the discriminative and non-informative local patches through multi-instance deep learning; thus, no manual annotation is required. Our method is validated on a synthetic dataset and a large-scale CT dataset. It achieves better performance than state-of-the-art approaches, including the standard deep CNN.
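The multi-instance selection idea behind the pre-train stage can be sketched in miniature. The paper uses a CNN; here a hypothetical `score(patch, cls)` function stands in for the network's class confidence, and "patches" are plain numbers rather than image regions.

```python
# Toy sketch of multi-instance patch selection: an image is a bag of local
# patches, its image-level score for a class is the max over its patches,
# and the arg-max patch is kept as that class's discriminative local region.
def mil_predict(patches, classes, score):
    image_score = {c: max(score(p, c) for p in patches) for c in classes}
    label = max(image_score, key=image_score.get)        # image-level class
    key_patch = max(patches, key=lambda p: score(p, label))  # discriminative patch
    return label, key_patch

# Hypothetical scorer: higher when a patch matches a class-specific target.
score = lambda p, c: -abs(p - {"cardiac": 5, "aorta": 9}[c])
label, patch = mil_predict([1, 5, 3], ["cardiac", "aorta"], score)
```

Because the image-level label supervises only the max-pooled patch score, the discriminative patch emerges automatically, which is why no manual patch annotation is required.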
Simultaneous PET/MR of the brain is a promising new technology for characterizing patients with suspected cognitive impairment or epilepsy. Unlike CT, however, MR signal intensities do not provide a direct correlate to PET photon attenuation, and inaccurate radiotracer standardized uptake value (SUV) estimation could limit future PET/MR clinical applications. We tested a novel attenuation correction (AC) method that supplements standard Dixon-based tissue segmentation with a superimposed model-based bone compartment. Methods We directly compared SUV estimation for MR-based AC methods against reference CT AC in 16 patients undergoing same-day, single-dose 18FDG PET/CT and PET/MR for suspected neurodegeneration. Three Dixon-based MR AC methods were compared with CT: standard Dixon 4-compartment segmentation alone; Dixon with a superimposed model-based bone compartment; and Dixon with a superimposed bone compartment and a linear attenuation value optimized specifically for brain tissue. The brain was segmented using a 3D T1-weighted volumetric MR sequence, and SUV estimates were compared with CT AC for the whole image, the whole brain, and 91 FreeSurfer-based regions of interest. Results Modifying the linear AC value specifically for brain and superimposing a model-based bone compartment reduced the whole-brain SUV estimation bias of Dixon-based PET/MR AC by 95% compared with reference CT AC (P < 0.05), leaving a residual −0.3% whole-brain mean SUV bias. Furthermore, regional brain analysis demonstrated only 3 frontal lobe regions with an SUV estimation bias of 5% or greater (P < 0.05). These biases appeared to correlate with high individual variability in frontal bone thickness and pneumatization. Conclusion Bone compartment and linear AC modifications yield a highly accurate MR AC method in subjects with suspected neurodegeneration. This prototype MR AC solution appears equivalent to other recently proposed solutions and does not require additional MR sequences or scan time.
These data also suggest that exclusively model-based MR AC approaches may be adversely affected by common individual variations in skull anatomy.
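The bias metric used throughout these comparisons is straightforward and can be sketched directly: the percent difference of the MR-AC SUV relative to the CT-AC reference, averaged over regions of interest. All SUV values below are made up for illustration.

```python
# Hedged sketch of the SUV bias metric: percent difference of MR-based AC
# versus the CT-based AC reference, summarized as mean +/- SD over ROIs.
from statistics import mean, stdev

def percent_bias(suv_mr, suv_ct):
    return 100.0 * (suv_mr - suv_ct) / suv_ct

suv_ct = [1.00, 2.00, 0.50]   # hypothetical reference SUVs (CT AC)
suv_mr = [0.90, 1.90, 0.49]   # hypothetical MR-AC SUVs for the same ROIs
biases = [percent_bias(m, c) for m, c in zip(suv_mr, suv_ct)]
avg, sd = mean(biases), stdev(biases)
```

A negative average bias, as here, corresponds to the systematic SUV underestimation both abstracts report when bone is missing from the μ-map.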
Vitamin D deficiency and oral diseases (periodontitis, caries, and tooth loss) are highly prevalent in Germany. Previous studies suggested that vitamin D might be a modifiable protective factor for periodontitis, caries, and tooth loss; however, prospective studies investigating such associations are limited. We explored the association between the serum concentration of 25-hydroxyvitamin D (25OHD) and the incidence of tooth loss, progression of clinical attachment loss (CAL) ≥ 3 mm, and progression of restorative and caries status in a population-based longitudinal study. We analyzed data from 1,904 participants of the Study of Health in Pomerania with a five-year follow-up. Generalized estimating equation models were applied to evaluate tooth-specific associations between serum 25OHD and the incidence of tooth loss, progression of CAL ≥ 3 mm, and progression of restorative and caries status. Age, sex, education, smoking status, alcohol consumption, waist circumference, dental visit frequency, reason for dental visits, use of vitamin D or calcium supplements, and season of blood draw were considered as confounders. Serum 25OHD was inversely associated with the incidence of tooth loss, with a significant dose-response relationship (p = .0022) across the quintiles of serum 25OHD. After adjusting for multiple confounders, each 10-µg/L increase in serum 25OHD was associated with a 13% lower risk of tooth loss (risk ratio: 0.87; 95% confidence interval: 0.79, 0.96). The association was attenuated for changes of CAL ≥ 3 mm when adjusting for multiple confounders. No significant association was found between serum 25OHD and caries progression. Vitamin D might be a protective factor against tooth loss, an effect that might be partially mediated by its effect on periodontitis.
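How a per-unit regression coefficient turns into the reported "risk ratio per 10 µg/L" can be shown numerically. In a log-link model such as a Poisson GEE, the risk ratio for a k-unit increase is exp(k·β); the coefficient below is hypothetical, chosen only so that the resulting ratio lands near the study's 0.87.

```python
import math

# Hedged sketch: converting a log-link regression coefficient (per 1 ug/L
# of 25OHD) into a risk ratio per 10-ug/L increase. The coefficient value
# is hypothetical, not taken from the study's fitted model.
beta_per_ug = -0.0139                     # hypothetical log-RR per 1 ug/L
rr_per_10ug = math.exp(10 * beta_per_ug)  # risk ratio per 10 ug/L
risk_reduction_pct = 100.0 * (1.0 - rr_per_10ug)
```

This also shows why such coefficients are usually reported per 10 µg/L: the per-µg/L ratio (≈0.986 here) is too close to 1 to read at a glance.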
A novel statistical shape model is presented for automatic and accurate segmentation of the prostate boundary from 3D ultrasound (US) images, using a hierarchical texture-based matching method. The method comprises three steps. First, Gabor filter banks are used to capture rotation-invariant texture features at different scales and orientations. Second, the different levels of texture features are integrated by a kernel support vector machine (KSVM) to optimally differentiate the prostate from surrounding tissues. Third, a statistical shape model is hierarchically deformed to the prostate boundary by robust texture and shape matching. Experimental results demonstrate the performance of the proposed method in segmenting 3D US prostate images.
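The first step, a Gabor filter bank over scales and orientations, can be sketched in a simplified form. The paper's exact filter design is not reproduced here; the scale/frequency pairs below are arbitrary, and rotation invariance would come from taking the maximum response magnitude across orientations at each pixel.

```python
import math

# Simplified sketch of a 2D Gabor filter bank (real part only): a Gaussian
# envelope modulated by a cosine along the rotated x-axis. Scales and
# frequencies here are arbitrary illustration values.
def gabor_kernel(sigma, freq, theta, size=7):
    c, kernel = size // 2, []
    for y in range(size):
        row = []
        for x in range(size):
            # rotate coordinates by theta so the filter is oriented
            xr = (x - c) * math.cos(theta) + (y - c) * math.sin(theta)
            yr = -(x - c) * math.sin(theta) + (y - c) * math.cos(theta)
            g = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * freq * xr))
        kernel.append(row)
    return kernel

bank = [gabor_kernel(s, f, t)
        for s, f in [(1.5, 0.25), (3.0, 0.125)]         # two scales
        for t in [i * math.pi / 4 for i in range(4)]]   # four orientations
```

Convolving an image with every kernel in the bank and pooling over orientations yields the multi-scale, rotation-invariant texture features that the KSVM then integrates.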
T1-weighted image (T1WI) and T2-weighted image (T2WI) are two routinely acquired magnetic resonance (MR) modalities that provide complementary information for clinical and research use. However, their relatively long acquisition times make the acquired images vulnerable to motion artifacts. To speed up the imaging process, various algorithms have been proposed to reconstruct high-quality images from under-sampled k-space data. However, most existing algorithms rely only on a mono-modality acquisition for image reconstruction. In this paper, we propose to combine complementary MR acquisitions (specifically, T1WI and under-sampled T2WI) to reconstruct a high-quality image corresponding to the fully sampled T2WI. To the best of our knowledge, this is the first work to fuse multi-modal MR acquisitions through deep learning to speed up the reconstruction of a target image. Specifically, we
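A generic building block of such under-sampled reconstructions, regardless of the specific network, is the k-space data-consistency step: measured T2WI samples are kept wherever they were acquired, and the (here, T1WI-informed) estimate is retained elsewhere. The values below are toy numbers, not real k-space data.

```python
# Hedged sketch of a k-space data-consistency step: wherever a sample was
# actually acquired (mask True), the measured value replaces the estimate;
# un-acquired locations keep the reconstruction's estimate.
def data_consistency(k_estimate, k_measured, sampling_mask):
    """Merge estimated and measured k-space samples per the sampling mask."""
    return [m if sampled else e
            for e, m, sampled in zip(k_estimate, k_measured, sampling_mask)]

k_est  = [1 + 2j, 3 + 0j, 0 + 1j]   # toy network estimate
k_meas = [1 + 1j, 0 + 0j, 0 + 0j]   # only the first location was acquired
mask   = [True, False, False]
k_dc   = data_consistency(k_est, k_meas, mask)
```

Enforcing consistency with the acquired samples guarantees the output never contradicts the data that was actually measured, however the missing samples are filled in.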