A major challenge in estimating crop traits (biophysical variables) from canopy reflectance is the creation of a high-quality training dataset. This can be addressed by using radiative transfer models (RTMs) to generate training datasets that represent "real-world" data across varying crop types, growth stages, and observation configurations. However, this approach can lead to "ill-posed" problems arising from assumptions in the sampling strategy and from uncertainty in the model, resulting in unsatisfactory inversion results for the retrieval of target variables. To address this problem, this research investigates a practical way to generate higher-quality "synthetic" training data by integrating a crop growth model (CGM, in this case APSIM) with an RTM (in this case PROSAIL). This integration controls the uncertainties of the RTM by imposing biological constraints on the distribution and co-distribution of related variables. The method was then theoretically validated on two types of synthetic datasets, generated by PROSAIL alone or by the coupling of APSIM and PROSAIL, by comparing estimation precision for leaf area index (LAI), leaf chlorophyll content (Cab), leaf dry matter (Cm) and leaf water content (Cw). Additionally, the capabilities of current deep learning techniques using high spectral resolution hyperspectral data were investigated.
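The "biological constraints on distribution and co-distribution" idea can be illustrated with a minimal sketch: instead of sampling traits independently (as in a plain RTM look-up strategy), draw them jointly with a crop-like correlation, as a CGM trajectory would impose. The ranges, means, and correlation below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical co-distribution constraint: sample LAI and Cab jointly
# with a positive correlation (here ~0.6) rather than independently.
# All numbers are illustrative, not taken from APSIM or PROSAIL.
n = 1000
mean = np.array([3.0, 40.0])        # LAI (m2 m-2), Cab (ug cm-2)
cov = np.array([[1.0, 6.0],
                [6.0, 100.0]])      # off-diagonal term couples the traits
lai, cab = rng.multivariate_normal(mean, cov, size=n).T
lai = np.clip(lai, 0.0, None)       # biological constraint: LAI >= 0

corr = np.corrcoef(lai, cab)[0, 1]
print(f"sampled LAI-Cab correlation: {corr:.2f}")
```

Independently sampled traits would show a correlation near zero; the joint draw preserves the biologically plausible coupling that the CGM-RTM integration is meant to enforce.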
The main findings include: (1) A feedforward neural network (FFNN) with an appropriate configuration is a promising technique for retrieving crop traits from input features consisting of 1 nm-wide hyperspectral bands across the 400-2500 nm range plus the observation configuration (solar and viewing angles), leading to precise joint estimation of LAI (RMSE = 0.061 m2 m-2), Cab (RMSE = 1.42 μg cm-2), Cm (RMSE = 0.000176 g cm-2) and Cw (RMSE = 0.000319 g cm-2); (2) For model simplification, restricting the FFNN input to the narrower 400-1100 nm range without the observation configuration yielded less precise estimates of LAI (RMSE = 0.087 m2 m-2), Cab (RMSE = 1.92 μg cm-2), Cm (RMSE = 0.000299 g cm-2) and Cw (RMSE = 0.001271 g cm-2); (3) Introducing biological constraints into the training datasets improved FFNN performance in both average precision and stability, resulting in much more accurate estimates of LAI (RMSE = 0.006 m2 m-2), Cab (RMSE = 0.45 μg cm-2), Cm (RMSE = 0.000039 g cm-2) and Cw (RMSE = 0.000072 g cm-2), and this improvement could be increased further by enriching sample diversity in the training dataset.
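The retrieval setup described above, a feedforward network mapping spectral features to several traits at once, can be sketched in miniature. This is a toy stand-in under stated assumptions: 50 synthetic "spectral" features instead of 1 nm bands over 400-2500 nm, a single hidden layer, and randomly generated trait responses; it only demonstrates the joint multi-output regression idea, not the paper's actual architecture or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 50 "spectral" features, 4 traits (LAI, Cab, Cm, Cw).
n_samples, n_features, n_traits = 200, 50, 4
X = rng.normal(size=(n_samples, n_features))
true_W = rng.normal(size=(n_features, n_traits))
Y = np.tanh(X @ true_W)  # synthetic nonlinear trait response

# One-hidden-layer FFNN trained with plain batch gradient descent on MSE.
n_hidden = 32
W1 = rng.normal(scale=0.1, size=(n_features, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_traits));   b2 = np.zeros(n_traits)

def forward(X):
    H = np.tanh(X @ W1 + b1)     # hidden activations
    return H, H @ W2 + b2        # linear output layer for regression

_, pred0 = forward(X)
loss0 = np.mean((pred0 - Y) ** 2)

lr = 0.01
for _ in range(500):
    H, pred = forward(X)
    grad_out = 2 * (pred - Y) / n_samples        # dMSE/dpred
    gW2 = H.T @ grad_out;  gb2 = grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * (1 - H ** 2)    # backprop through tanh
    gW1 = X.T @ grad_h;    gb1 = grad_h.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
loss = np.mean((pred - Y) ** 2)
print(f"MSE before training: {loss0:.3f}, after: {loss:.3f}")
```

The key design point mirrored from the abstract is the shared network body with one output per trait, so the four variables are estimated jointly rather than by four separate models.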
Purpose: To develop an accurate feature-based 3D-3D deformable registration method for a patient-specific motion model used in external beam radiation treatment of lung cancers, based on a 4D computed tomography (4DCT) image set and utilizing the unique features of the bifurcations of tubular organs. Method and Materials: Each 4DCT set consisted of 10 phases of 3DCT volumes spanning one breathing cycle. A 3D tubular organ segmentation was first performed on each phase to extract the centerlines of the bronchial trees, estimate their radii, and automatically detect the bifurcation points by applying a learning algorithm with specially designed filters. A novel deformable registration method was then applied to minimize the distances between corresponding bifurcation points in a target phase and a reference phase (e.g., between the 0% and 50% phases), capturing the transformation between phases. The results were evaluated using volume- and distance-based estimators. Results: The learning method was trained and tested with positive and negative examples. Its generalization error, estimated by bootstrapping, had a mean error rate of 4.6%. Detailed quantitative and qualitative registration results are shown in the supporting materials. After deformable registration, the mean distance estimator yielded results ranging from 1.93 mm to 4.46 mm between corresponding points in the 0% and 50% phase images. The root-mean-square error ranged from 1.99 mm to 5.13 mm. Conclusions: A novel and accurate 3D-3D registration method based on the bifurcations of tubular organs was developed to capture the transformation between the 3DCT images in 4DCT image sets. These preliminary results show that the proposed method is robust, fast and accurate for the deformable registration of tubular organs in the lung.
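The core step, minimizing distances between corresponding bifurcation points across phases and reporting mean distance and RMSE, can be sketched with a simplified stand-in. The paper's method is a full deformable registration; the sketch below fits only a global affine transform by least squares to synthetic corresponding points, purely to illustrate the distance-minimization and evaluation idea. All point sets and the deformation are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical corresponding bifurcation points: 30 points (mm) in a
# reference phase, and the same points in a target phase under a known
# affine deformation plus small localization noise.
ref = rng.uniform(0, 100, size=(30, 3))
A_true = np.eye(3) + 0.05 * rng.normal(size=(3, 3))
t_true = np.array([2.0, -1.5, 3.0])
target = ref @ A_true.T + t_true + rng.normal(scale=0.1, size=ref.shape)

# Least-squares affine fit minimizing corresponding-point distances.
ones = np.ones((ref.shape[0], 1))
M = np.hstack([ref, ones])                       # (N, 4) design matrix
params, *_ = np.linalg.lstsq(M, target, rcond=None)
warped = M @ params                              # registered reference points

# Distance-based evaluation, as in the abstract: mean distance and RMSE.
residual = np.linalg.norm(warped - target, axis=1)
mean_dist_before = np.linalg.norm(ref - target, axis=1).mean()
mean_dist = residual.mean()
rmse = np.sqrt((residual ** 2).mean())
print(f"mean distance: {mean_dist_before:.2f} mm -> {mean_dist:.2f} mm, "
      f"RMSE: {rmse:.2f} mm")
```

A deformable method would replace the single affine transform with a spatially varying one (e.g., a spline-based displacement field), but the objective, driving corresponding bifurcation points together, and the mean-distance/RMSE evaluation are the same.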