Purpose: The purpose of this study was to evaluate the performance of three common deformable image registration (DIR) packages across algorithms and institutions.
Methods and Materials: The Deformable Image Registration Evaluation Project (DIREP) provides ten virtual phantoms derived from computed tomography (CT) datasets of head-and-neck cancer patients imaged over a single treatment course. Using the DIREP phantoms, 35 institutions submitted DIR results generated with Velocity, MIM, or Eclipse. Submitted deformation vector fields (DVFs) were compared with ground-truth DVFs to calculate the target registration error (TRE) for six regions of interest (ROIs). Statistical analysis was performed to determine the variability between the DIR software packages and the variability among users of each algorithm.
Results: Overall mean TRE was 2.04 ± 0.35 mm for Velocity, 1.10 ± 0.29 mm for MIM, and 2.35 ± 0.15 mm for Eclipse. The mean TRE for MIM was significantly different from both Velocity and Eclipse for all ROIs. Velocity and Eclipse mean TREs were not significantly different except when evaluating registration of the cord or mandible. Significant differences between institutions were found for the MIM and Velocity platforms; however, these differences could be explained by variations in Velocity DIR parameters and in MIM software versions.
Conclusions: Average TRE was <3 mm for all three software platforms. However, maximum errors could exceed 2 cm, indicating that care should be exercised when using DIR. While MIM performed statistically better than the other packages, all evaluated algorithms achieved an average TRE smaller than the largest voxel dimension. For the phantoms studied here, significant differences between users of the same algorithm were minimal, suggesting that the algorithm used may have more impact on DIR accuracy than the particular registration technique employed by an individual user. A significant difference in TRE was found between MIM versions, showing that DIR quality assurance (QA) should be performed after software upgrades, as recommended by TG-132.
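To make the TRE calculation described in the Methods concrete, the following is a minimal sketch of how per-voxel TRE could be computed from a submitted and a ground-truth DVF within a single ROI. The function name, array shapes, and the assumption that both fields are sampled in millimeters on the same voxel grid are illustrative choices, not taken from DIREP.

    import numpy as np

    def tre_stats(dvf_submitted, dvf_ground_truth, roi_mask):
        # dvf_submitted, dvf_ground_truth: (Z, Y, X, 3) displacement
        # fields in mm, assumed sampled on the same voxel grid.
        # roi_mask: boolean (Z, Y, X) array selecting ROI voxels.
        #
        # TRE at each voxel is the Euclidean distance between the
        # submitted and ground-truth displacement vectors.
        tre = np.linalg.norm(dvf_submitted - dvf_ground_truth, axis=-1)
        roi_tre = tre[roi_mask]
        return roi_tre.mean(), roi_tre.std(), roi_tre.max()

Reporting the maximum alongside the mean mirrors the abstract's point that average errors below 3 mm can coexist with local errors exceeding 2 cm.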