Purpose: A method and computer tool to estimate percentage magnetic resonance imaging (MRI) breast density using three-dimensional T1-weighted MRI is introduced and compared with mammographic percentage density [X-ray mammography (XRM)]. Materials and Methods: Ethical approval and informed consent were obtained. A method to assess MRI breast density as the percentage volume occupied by water-containing tissue on three-dimensional T1-weighted MR images is described and applied in a pilot study to 138 subjects who were imaged by both MRI and XRM during the Magnetic Resonance Imaging in Breast Screening study. For comparison, percentage mammographic density was measured from matching XRMs as a ratio of dense to total projection areas, scored visually using a 21-point scale and measured by applying a two-dimensional interactive program (Cumulus). The MRI and XRM percent methods were compared, including assessment of left-right and interreader consistency. Results: Percent MRI density correlated strongly (r = 0.78; P < 0.0001) with percent mammographic density estimated using Cumulus. Comparison with visual assessment also showed a strong correlation. The mammographic methods overestimate density compared with MRI volumetric assessment by a factor approaching 2. Discussion: MRI provides direct three-dimensional measurement of the proportion of water-based tissue in the breast. It correlates well with visual and computerized percent mammographic density measurements. This method may have direct application in women having breast cancer screening by breast MRI and may aid in determination of risk. (Cancer Epidemiol Biomarkers Prev 2008;17(9):2268-74)
Introduction: Mammographic breast density is one of the strongest known risk factors for breast cancer. We present a novel technique for estimating breast density based on three-dimensional T1-weighted magnetic resonance imaging (MRI) and evaluate its performance, including for breast cancer risk prediction, relative to two standard mammographic density-estimation methods. Methods: The analyses were based on MRI (n = 655) and mammography (n = 607) images obtained in the course of the UK multicentre Magnetic Resonance Imaging in Breast Screening (MARIBS) study of asymptomatic women aged 31 to 49 years who were at high genetic risk of breast cancer. The MRI percent and absolute dense volumes were estimated using our novel algorithm (MRIBview), while mammographic percent and absolute dense areas were estimated using the Cumulus thresholding algorithm and also using a 21-point visual assessment scale for one mediolateral oblique image per woman. We assessed the relationships of the MRI and mammographic measures to one another, to standard anthropometric and hormonal factors, to BRCA1/2 genetic status, and to breast cancer risk (60 cases) using linear and Poisson regression. Results: MRI percent dense volume is well correlated with mammographic percent dense area (R = 0.76) but overall gives estimates 8.1 percentage points lower (P < 0.0001). Both show strong associations with established anthropometric and hormonal factors. Mammographic percent dense area, and to a lesser extent MRI percent dense volume, were lower in BRCA1 carriers (P = 0.001 and P = 0.010, respectively), but there was no association with BRCA2 carrier status. The study was underpowered to detect expected associations between percent density and breast cancer, but women with absolute MRI dense volume in the upper half of the distribution had double the risk of those in the lower half (P = 0.009). Conclusions: The MRIBview estimates of volumetric breast density are highly correlated with mammographic dense area but are not equivalent measures; the MRI absolute dense volume shows potential as a predictor of breast cancer risk that merits further investigation.
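At its core, percent dense volume reduces to counting breast voxels classified as water-containing (fibroglandular) tissue and dividing by the total breast volume. The following is a minimal sketch of that ratio only; the function name, the fixed intensity threshold, and the pre-computed breast mask are illustrative assumptions, not the MRIBview algorithm itself, whose tissue classification is more sophisticated.

```python
import numpy as np

def percent_dense_volume(volume, breast_mask, dense_threshold):
    """Estimate percent breast density as the fraction of breast voxels
    whose intensity indicates water-containing (dense) tissue.

    volume          : 3D array of T1-weighted intensities
    breast_mask     : 3D boolean array marking voxels inside the breast
    dense_threshold : intensity below which a voxel counts as dense
    """
    breast_voxels = volume[breast_mask]
    # On T1-weighted images fat is bright and fibroglandular (water-based)
    # tissue is darker, so this sketch counts a voxel as dense when its
    # intensity falls below the threshold (an assumption of the sketch).
    dense = np.count_nonzero(breast_voxels < dense_threshold)
    total = breast_voxels.size
    return 100.0 * dense / total

# Toy example: half of the masked voxels are dark, water-like tissue
vol = np.zeros((4, 4, 4))
vol[:2] = 10.0   # dark, water-like
vol[2:] = 200.0  # bright, fat-like
mask = np.ones_like(vol, dtype=bool)
print(percent_dense_volume(vol, mask, dense_threshold=100.0))  # 50.0
```

An absolute dense volume, as used in the risk analysis above, would simply multiply the dense voxel count by the voxel size instead of normalising by the breast volume.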
Abstract. In this study we investigated whether automatic refinement of manually segmented MR breast lesions improves the discrimination of benign and malignant breast lesions. A constrained maximum a posteriori scheme was employed to extract the most probable lesion from a user-provided coarse manual segmentation. Standard shape, texture, and contrast-enhancement features were derived from both the manual and the refined segmentations for 10 benign and 16 malignant lesions, and their discrimination ability was compared. The refined segmentations were more consistent than the manual segmentations from a radiologist and a non-expert, and the automatic refinement was robust to inaccuracies of the manual segmentation. Classification accuracy improved on average from 69% to 82% after segmentation refinement.
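The flavour of such a refinement step can be illustrated with a much-simplified sketch: Gaussian intensity models estimated inside and outside the coarse mask, a per-voxel maximum a posteriori label, and a connected-component constraint tying the result back to the manual outline. The function name and the single-Gaussian model are illustrative assumptions, not the published scheme.

```python
import numpy as np
from scipy import ndimage

def refine_segmentation(image, coarse_mask):
    """MAP-style refinement sketch: fit Gaussian intensity models to the
    inside/outside of the coarse mask, relabel every voxel by comparing
    posterior log-probabilities, then keep the connected component with
    the largest overlap with the manual outline."""
    fg = image[coarse_mask]
    bg = image[~coarse_mask]
    mu_f, sd_f = fg.mean(), fg.std() + 1e-6
    mu_b, sd_b = bg.mean(), bg.std() + 1e-6
    prior_f = coarse_mask.mean()
    # Gaussian log-likelihoods plus log-priors -> MAP label per voxel
    log_pf = -0.5 * ((image - mu_f) / sd_f) ** 2 - np.log(sd_f) + np.log(prior_f)
    log_pb = -0.5 * ((image - mu_b) / sd_b) ** 2 - np.log(sd_b) + np.log(1 - prior_f)
    map_mask = log_pf > log_pb
    # Constrain to the component that best overlaps the manual mask
    labels, n = ndimage.label(map_mask)
    best, refined = 0, np.zeros_like(map_mask)
    for k in range(1, n + 1):
        overlap = np.logical_and(labels == k, coarse_mask).sum()
        if overlap > best:
            best, refined = overlap, labels == k
    return refined

# Toy example: bright square "lesion" on a dark background,
# outlined sloppily by the coarse manual mask
img = np.full((10, 10), 10.0)
img[3:7, 3:7] = 100.0
coarse = np.zeros((10, 10), dtype=bool)
coarse[2:8, 2:8] = True
refined = refine_segmentation(img, coarse)
```

In the toy example the refined mask snaps to the bright square even though the manual outline overshoots it, mirroring the robustness to manual inaccuracy reported above.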
Image registration is a very important procedure in medical image analysis. However, the intensive computation involved has to some extent made image registration impractical for interactive use and has limited its general availability. This paper presents our current Grid project to facilitate image registration tasks. We have set up an image registration Grid by combining the attractive features of both the Globus and Condor distributed computing environments. To make it more convenient to use, we have also developed a web interface through which clients can specify and submit their image registration jobs to the Grid. Initial experiments on 3D breast MR images have shown encouraging results and demonstrated the suitability of Grid technology for this type of computationally intensive application. The image registration Grid makes it much more straightforward for different institutes to use identical registration programs and protocols to register images consistently, quickly, and efficiently. This can greatly improve data sharing and comparative studies in multi-centre trials. The Grid presented here could be an important step towards clinical applications of image registration. Future work will focus on refining the Grid, with the aim of upgrading it to a Grid Service, and on testing the system more extensively with medical imaging datasets.

Image registration is an important procedure in medical image analysis. Its purpose is to align one image (the source image) to another (the target or reference image) so that misalignments between them are minimized or eliminated, thus establishing spatial correspondence. Proper registration enables a better understanding of the features of interest and the integration of useful information. Applications are wide-ranging: from image-guided surgery to atlas construction, segmentation propagation, monitoring changes over time, and dynamic sequence analysis.
The general procedure of fully automated image registration typically requires optimization of a function measuring the similarity between the target and source images. A large variety of image registration methods have been proposed for medical applications. In practice, image registration can be further classified as rigid or non-rigid according to the underlying transformation model. The former registers images by assuming that they are misaligned by translations and rotations only, while the latter allows many more degrees of freedom. Compared with rigid registration, non-rigid registration can in principle model more complicated deformations but requires more computation [1,2]. Rigid registration is therefore normally used to provide a starting estimate before non-rigid registration in order to reduce computational time. Even so, non-rigid registration still demands massive computation: for example, an average computation time of up to 5 hours has been reported to register 3-D MR images of a single breast with a typical region of
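The similarity-optimization loop described above can be illustrated with a toy example restricted to integer 2D translations: exhaustively search candidate shifts and keep the one minimising the sum of squared differences (SSD). This is only a sketch; real rigid registration also optimises rotations and typically uses continuous, gradient-based optimisation rather than exhaustive search.

```python
import numpy as np

def register_translation(target, source, max_shift=5):
    """Toy intensity-based registration over integer 2D translations:
    try every shift within +/- max_shift and return the one minimising
    the sum-of-squared-differences similarity to the target."""
    best_shift, best_ssd = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(source, dy, axis=0), dx, axis=1)
            ssd = np.sum((shifted - target) ** 2)
            if ssd < best_ssd:
                best_ssd, best_shift = ssd, (dy, dx)
    return best_shift

# A source that is the target shifted by (-2, +1) should be corrected
# by the inverse shift (2, -1)
target = np.zeros((20, 20))
target[8:12, 8:12] = 1.0
source = np.roll(np.roll(target, -2, axis=0), 1, axis=1)
print(register_translation(target, source))  # (2, -1)
```

The cost of this brute-force search grows with the square of the search radius per 2D image, which hints at why full 3D non-rigid registration, with thousands of degrees of freedom, reaches the hours-long run times quoted above.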
Due to rapid progress in medical imaging technology, the analysis of multivariate image data is receiving increased interest. However, visual exploration of such data is challenging because it requires the integration of information from many different sources, which usually cannot be perceived at once by an observer. Image fusion techniques are commonly used to obtain information from multivariate image data, while psychophysical aspects of data visualization are usually not considered. Visualization is typically achieved by means of device-derived color scales. With respect to psychophysical aspects of visualization, more sophisticated color mapping techniques based on device-independent (and perceptually uniform) color spaces such as CIELUV have been proposed. Nevertheless, the benefit of these techniques is limited by the fact that they require complex color space transformations to account for device characteristics and viewing conditions. In this paper we present a new framework for the visualization of multivariate image data using image fusion and color mapping techniques. To overcome problems of consistent image presentation and color space transformation, we propose perceptually optimized color scales based on CIELUV in combination with the sRGB (IEC 61966-2-1) color specification. In contrast to color definitions based purely on CIELUV, sRGB data can be used directly under reasonable conditions, without complex transformations or additional information. In the experimental section we demonstrate the advantages of our approach by applying these techniques to the visualization of DCE-MRI images from breast cancer research.
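The idea of defining a colour scale in a perceptually uniform space and delivering it as sRGB can be sketched for the simplest case: a gray ramp with uniform steps in CIE L*, the lightness axis shared by CIELUV and CIELAB. The L*-to-luminance formula and the sRGB transfer function below are the standard CIE and IEC 61966-2-1 definitions, but the function itself is only an illustrative sketch, not the authors' framework.

```python
import numpy as np

def lightness_uniform_gray_scale(n):
    """Build an n-step gray scale whose steps are uniform in CIE L*,
    encoded as 8-bit sRGB values. Equal steps in L* approximate equal
    perceived-lightness steps, unlike equal steps in device RGB."""
    L = np.linspace(0.0, 100.0, n)
    # CIE lightness L* -> relative luminance Y (white point Yn = 1)
    Y = np.where(L > 8.0, ((L + 16.0) / 116.0) ** 3, L / 903.3)
    # linear luminance -> sRGB transfer function (IEC 61966-2-1)
    srgb = np.where(Y <= 0.0031308,
                    12.92 * Y,
                    1.055 * np.power(Y, 1.0 / 2.4) - 0.055)
    # gray: identical 8-bit value for R, G, and B
    return np.round(255.0 * srgb).astype(int)

print(lightness_uniform_gray_scale(5))  # 5 gray levels from 0 to 255
```

Because both endpoints of the chain (L* and sRGB) are standardised, such a scale can be shipped as plain sRGB values and displayed directly under reasonable viewing conditions, which is the practical advantage argued for in the abstract.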