Purpose
The purpose of this educational report is to provide an overview of the current state-of-the-art PET auto-segmentation (PET-AS) algorithms and their respective validation, with an emphasis on helping the user understand the challenges and pitfalls of selecting and implementing a PET-AS algorithm for a particular application.
Approach
A brief description of the different types of PET-AS algorithms is provided, using a classification based on method complexity and type. The advantages and limitations of current PET-AS algorithms are highlighted on the basis of current publications and existing comparison studies. Available image datasets and contour evaluation metrics are reviewed with regard to their suitability for establishing a standardized evaluation of PET-AS algorithms. The performance requirements for the algorithms, and their dependence on the application, the radiotracer used, and the evaluation criteria, are described and discussed. Finally, a procedure for algorithm acceptance and implementation, as well as the complementary roles of manual and auto-segmentation, are addressed.
Findings
A large number of PET-AS algorithms have been developed within the last 20 years. Many of the proposed algorithms are based on either fixed or adaptively selected thresholds. More recently, numerous papers have proposed more advanced image analysis paradigms to perform semi-automated delineation of PET images. However, the level of algorithm validation is variable and, for most published algorithms, either insufficient or inconsistent, which prevents recommending any single algorithm. This is compounded by the fact that realistic image configurations with low signal-to-noise ratios (SNR) and heterogeneous tracer distributions have rarely been used. Large variations in the evaluation methods used in the literature point to the need for a standardized evaluation protocol.
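The fixed-threshold family mentioned above can be sketched in a few lines: voxels whose uptake exceeds a fixed fraction of the maximum are labeled as lesion. This is a minimal illustration only; the 40% default is a commonly cited convention, not a recommendation of this report, and adaptive methods adjust the fraction based on factors such as source-to-background ratio.

```python
import numpy as np

def threshold_segment(pet_volume, fraction=0.40):
    """Fixed-threshold PET auto-segmentation sketch.

    Labels voxels with uptake >= fraction * max(uptake) as foreground.
    """
    pet_volume = np.asarray(pet_volume, dtype=float)
    threshold = fraction * pet_volume.max()
    return pet_volume >= threshold

# Toy 1D uptake profile: a hot lesion on a cold background.
profile = np.array([0.1, 0.2, 1.0, 2.0, 1.0, 0.2, 0.1])
mask = threshold_segment(profile, fraction=0.40)
# Threshold is 0.8 (40% of max 2.0), so the three central voxels are kept.
```

In practice such a mask is computed per lesion on a 3D volume, and the abstract's caveat applies: a single fixed fraction generalizes poorly to low-SNR, heterogeneous uptake.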
Conclusions
Available comparison studies suggest that PET-AS algorithms relying on advanced image analysis paradigms generally provide more accurate segmentation than approaches based on PET activity thresholds, particularly for realistic configurations. However, this may not be the case for lesions of simple shape in situations with a narrower range of parameters, where simpler methods may also perform well. Recent algorithms that employ some type of consensus or automatic selection between several PET-AS methods have the potential to overcome the limitations of the individual methods when appropriately trained. In either case, accuracy evaluation is required for each different PET scanner and each scanning and image reconstruction protocol. For the simpler, less robust approaches, adaptation to scanning conditions, tumor type, and tumor location by optimization of parameters is necessary. The results from the method evaluation stage can be used to estimate the contouring uncertainty. All PET-AS contours should be critically verified by a physician. A standard test, i.e., a benchmark dedicated to ...
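The consensus idea referred to above can be illustrated with a simple majority vote over several candidate masks. This is a deliberately simplified stand-in for the consensus-style PET-AS approaches the report describes; real implementations (e.g., probabilistic label fusion) are considerably more elaborate.

```python
import numpy as np

def majority_vote(masks):
    """Consensus of several binary segmentation masks by majority vote.

    A voxel is labeled foreground when more than half of the input
    masks agree. Simplified illustration, not a specific published method.
    """
    stack = np.stack([np.asarray(m, dtype=bool) for m in masks], axis=0)
    return stack.sum(axis=0) > (len(masks) / 2)

# Three hypothetical PET-AS outputs for the same four voxels.
m1 = np.array([1, 1, 0, 0], dtype=bool)
m2 = np.array([1, 0, 1, 0], dtype=bool)
m3 = np.array([1, 1, 1, 0], dtype=bool)
consensus = majority_vote([m1, m2, m3])
# Voxels 0-2 win at least 2 of 3 votes; voxel 3 wins none.
```

The appeal, as the conclusions note, is robustness: a voxel mislabeled by one method can be outvoted by the others, provided the ensemble is appropriately trained.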
Purpose
The use of neural networks to directly predict three‐dimensional dose distributions for automatic planning is becoming popular. However, the existing methods use only patient anatomy as input and assume consistent beam configuration for all patients in the training database. The purpose of this work was to develop a more general model that considers variable beam configurations in addition to patient anatomy to achieve more comprehensive automatic planning with a potentially easier clinical implementation, without the need to train specific models for different beam settings.
Methods
The proposed anatomy and beam (AB) model is based on our newly developed deep learning architecture, the hierarchically densely connected U‐Net (HD U‐Net), which combines U‐Net and DenseNet. The AB model contains 10 input channels: one for beam setup and the other nine for anatomical information (PTV and organs). The beam setup information is represented by a 3D matrix of the non‐modulated beam’s eye view ray‐tracing dose distribution. We used a set of images from 129 patients with lung cancer treated with IMRT with heterogeneous beam configurations (4–9 beams of various orientations) for training/validation (100 patients) and testing (29 patients). Mean squared error was used as the loss function. We evaluated the model’s accuracy by comparing the mean dose, maximum dose, and other relevant dose–volume metrics of the predicted dose distribution against those of the clinically delivered dose distribution. Dice similarity coefficients were computed to assess the spatial correspondence of the isodose volumes between the predicted and clinically delivered doses. The model was also compared with our previous work, the anatomy only (AO) model, which does not consider beam setup information and uses only nine channels for anatomical information.
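The 10-channel input described above can be assembled as a channel-stacked tensor: the ray-tracing dose in one channel and the anatomical masks in the remaining nine. The sketch below only shows this input layout; the function name, the max-normalization of the beam channel, and the array shapes are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def build_ab_input(beam_dose, anatomy_masks):
    """Stack the AB model's 10 input channels (illustrative sketch).

    Channel 0: non-modulated beam's-eye-view ray-tracing dose (beam setup).
    Channels 1-9: binary anatomy masks (PTV and organs).
    Normalization of the beam channel is an assumption for this sketch.
    """
    assert len(anatomy_masks) == 9, "expected 9 anatomy channels"
    beam = np.asarray(beam_dose, dtype=float)
    beam = beam / (beam.max() + 1e-8)  # assumed scaling to [0, 1]
    channels = [beam] + [np.asarray(m, dtype=float) for m in anatomy_masks]
    return np.stack(channels, axis=0)  # shape: (10, D, H, W)

# Toy example on a 16^3 volume.
rng = np.random.default_rng(0)
beam = rng.random((16, 16, 16))
masks = [np.zeros((16, 16, 16)) for _ in range(9)]
x = build_ab_input(beam, masks)
```

Stacking the beam dose as an extra image channel, rather than as a separate non-image input, lets the same convolutional HD U‐Net backbone consume anatomy and beam setup jointly.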
Results
The AB model outperformed the AO model, especially in the low and medium dose regions. In terms of dose–volume metrics, AB outperformed AO by about 1–2%. The largest improvement, about 5%, was found in the lung volume receiving a dose of 5 Gy or more (V5). The improvement for spinal cord maximum dose was also notable: 3.6% for cross‐validation and 2.6% for testing. The AB model achieved Dice scores for isodose volumes as much as 10% higher than the AO model in low and medium dose regions and about 2–5% higher in high dose regions.
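The two evaluation quantities used above, the Dice similarity coefficient of isodose volumes and dose–volume metrics such as V5, have standard definitions that can be computed directly from the dose arrays. A minimal sketch, with hypothetical toy inputs:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def v_dose(dose, structure_mask, level_gy):
    """V_x metric: percent of a structure's volume receiving >= level_gy."""
    voxels = np.asarray(dose, float)[np.asarray(structure_mask, bool)]
    return 100.0 * (voxels >= level_gy).mean()

# Toy isodose volumes (e.g., the 5 Gy level) for predicted vs. clinical dose.
pred_iso = np.array([[0, 1], [1, 1]], dtype=bool)
clin_iso = np.array([[1, 1], [0, 1]], dtype=bool)
d = dice(pred_iso, clin_iso)  # 2*2 / (3+3) = 0.666...

# Toy lung V5: doses in Gy within a 4-voxel structure.
lung_dose = np.array([2.0, 6.0, 10.0, 4.0])
lung_mask = np.ones(4, dtype=bool)
v5 = v_dose(lung_dose, lung_mask, 5.0)  # 2 of 4 voxels -> 50%
```

Computing Dice at several isodose levels, as the paper does, probes spatial agreement separately in the low, medium, and high dose regions rather than collapsing everything into one scalar error.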
Conclusions
The AO model, which does not use beam configuration as input, can still predict dose distributions with reasonable accuracy in high dose regions but introduces large errors in low and medium dose regions for IMRT cases with variable beam numbers and orientations. The proposed AB model outperforms the AO model substantially in low and medium dose regions, and slightly in high dose regions, by incorporating beam setup information through a cumulative non‐modulated beam’s eye view ray‐tracing dose distribution. This new model represents a major step toward predicting 3D dose distributions in real clinical practice, where beam configu...