Image registration and fusion algorithms exist in almost every software system that creates or uses images in radiotherapy. Most treatment planning systems support some form of image registration and fusion to allow the use of multimodality and time-series image data, and even anatomical atlases, to assist in target volume and normal tissue delineation. Treatment delivery systems perform registration and fusion between the planning images and the in-room images acquired during treatment to assist patient positioning. Advanced applications are beginning to support daily dose assessment and to enable adaptive radiotherapy, using image registration and fusion to propagate contours and accumulate dose between image data acquired over the course of therapy, providing up-to-date estimates of anatomical changes and delivered dose. This information aids in the detection of anatomical and functional changes that might elicit changes in the treatment plan or prescription.

Because the output of the image registration process is always used as the input to another process for planning or delivery, it is important to understand and communicate the uncertainty associated with the software in general and with the result of a specific registration. Unfortunately, there is no standard mathematical formalism for doing so in real-world situations, where noise, distortion, and complex anatomical variations can occur.
Validation of the software system's performance is also complicated by the lack of documentation available for commercial systems, leading to the use of these systems in an undesirable 'black-box' fashion.

In view of this situation, and of the central role that image registration and fusion play in treatment planning and delivery, the Therapy Physics Committee of the American Association of Physicists in Medicine commissioned Task Group 132 to review current approaches and solutions for image registration (both rigid and deformable) in radiotherapy and to provide recommendations for quality assurance and quality control of these clinical processes.
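For the rigid case mentioned above, registration between corresponding point sets (e.g., fiducials or anatomical landmarks identified on both the planning and in-room images) reduces to estimating a rotation and translation. A minimal sketch of the standard SVD-based (Kabsch) solution with a synthetic example follows; this is illustrative only and is not code from the task group report:

```python
import numpy as np

def rigid_register(moving, fixed):
    """Estimate rotation R and translation t minimizing ||moving @ R.T + t - fixed||
    over corresponding 3D point sets, via SVD of the cross-covariance (Kabsch)."""
    mu_m = moving.mean(axis=0)
    mu_f = fixed.mean(axis=0)
    H = (moving - mu_m).T @ (fixed - mu_f)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection solutions
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_f - R @ mu_m
    return R, t

# Synthetic check: points rotated 10 degrees about the IS axis and shifted.
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
pts = np.random.default_rng(0).normal(size=(20, 3))
fixed = pts @ R_true.T + t_true
R, t = rigid_register(pts, fixed)
```

In clinical software the transform is usually estimated from image intensities rather than explicit landmarks, but a landmark-based solve like this is the typical basis for commissioning tests with phantoms of known geometry.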
As more pretreatment imaging becomes integrated into the treatment planning process and full three-dimensional image guidance becomes part of treatment delivery, the need for deformable image registration techniques becomes more apparent. A novel finite element model-based multiorgan deformable image registration method, MORFEUS, has been developed. The basis of this method is twofold: first, individual organ deformation can be accurately modeled by deforming the surface of the organ at one instance into the surface of the organ at another instance and assigning material properties that allow the internal structures to be accurately deformed into the secondary position; second, multiorgan deformable alignment can be achieved by explicitly defining the deformation of a subset of organs and assigning surface interfaces between organs. The feasibility and accuracy of the method were tested on MR thoracic and abdominal images of healthy volunteers at inhale and exhale. For the thoracic cases, the lungs and external surface were explicitly deformed, and the breasts were implicitly deformed based on their relation to the lung and external surface. For the abdominal cases, the liver, spleen, and external surface were explicitly deformed, and the stomach and kidneys were implicitly deformed. The average accuracy (average absolute error) of the lung and liver deformation, determined by tracking visible bifurcations, was 0.19 (s.d.: 0.09), 0.28 (s.d.: 0.12), and 0.17 (s.d.: 0.07) cm in the LR, AP, and IS directions, respectively. The average accuracy of the implicitly deformed organs was 0.11 (s.d.: 0.11), 0.13 (s.d.: 0.12), and 0.08 (s.d.: 0.09) cm in the LR, AP, and IS directions, respectively. The average vector magnitude of the accuracy was 0.44 (s.d.: 0.20) cm for the lung and liver deformation and 0.24 (s.d.: 0.18) cm for the implicitly deformed organs.
The two main processes, explicit deformation of the selected organs and the finite element analysis calculations, require less than 120 and 495 s, respectively. This platform can facilitate the integration of deformable image registration into online image guidance procedures, dose calculations, and tissue response monitoring, as well as multimodality image registration for treatment planning.
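The accuracy figures quoted above combine two landmark-based metrics: the mean absolute error per anatomical direction (LR, AP, IS) and the mean magnitude of the 3D error vector. A minimal sketch of how such metrics are computed from tracked bifurcation positions; the coordinates below are hypothetical, not data from the study:

```python
import numpy as np

# Hypothetical landmark positions in cm (rows = tracked bifurcations,
# columns = LR, AP, IS). "predicted" is where the deformable registration
# maps each landmark; "observed" is its true position on the target image.
predicted = np.array([[0.10, -0.20, 0.05],
                      [0.30,  0.10, -0.10],
                      [-0.15, 0.25,  0.00]])
observed = np.array([[0.00, 0.00, 0.00],
                     [0.20, 0.00, 0.00],
                     [0.00, 0.00, 0.00]])

err = predicted - observed
abs_err_per_axis = np.abs(err).mean(axis=0)   # mean absolute error in LR, AP, IS
vec_mag = np.linalg.norm(err, axis=1).mean()  # mean 3D error-vector magnitude
```

Note that the vector magnitude is always at least as large as any single per-axis error, which is why the reported 0.44 cm vector figure exceeds the 0.19-0.28 cm per-direction figures.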