A markerless method is presented for building a surrogate-driven motion model of a lung tumour from a cone-beam CT scan. By monitoring an external surrogate in real time, it is envisaged that the motion model will be used to drive gated or tracked treatments. The motion model would be built immediately before each fraction of treatment and can therefore account for inter-fraction variation. The method could also provide a better assessment of tumour shape and motion prior to the delivery of each fraction of stereotactic ablative radiotherapy. The two-step method first enhances the tumour region in the projections and then fits the surrogate-driven motion model. On simulated data, the mean absolute error was reduced to 1 mm. For patient data, errors were determined by comparing estimated and clinically identified tumour positions in the projections, scaled to mm at the isocentre. Averaged over all scans used, the mean absolute error was under 2.5 mm in the superior-inferior and transverse directions.
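As a rough illustration of the model-fitting step, the sketch below fits a simple surrogate-driven model by linear least squares. The linear form, the use of the surrogate's temporal derivative, and all function names are assumptions chosen for illustration, not the authors' actual implementation.

```python
# Minimal sketch of step two, assuming a linear surrogate-to-motion
# relationship (the actual model form is not specified in the abstract).
import numpy as np

def fit_surrogate_model(surrogate, positions):
    """Fit tumour position as a linear function of an external surrogate
    signal and its temporal derivative (a common respiratory-model choice).

    surrogate: (T,) external surrogate signal sampled at projection times.
    positions: (T, 3) tumour positions (mm) from the enhanced projections.
    """
    ds = np.gradient(surrogate)                       # surrogate slope
    A = np.column_stack([surrogate, ds, np.ones_like(surrogate)])
    # Solve A @ coeffs ~= positions in the least-squares sense.
    coeffs, *_ = np.linalg.lstsq(A, positions, rcond=None)
    return coeffs                                     # (3, 3) model parameters

def predict_position(coeffs, s, ds):
    """Real-time position estimate from the current surrogate value and slope."""
    return np.array([s, ds, 1.0]) @ coeffs
```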
We propose a method to build a fully deformable motion model directly from cone-beam CT (CBCT) projections. This allows inter-fraction variations in respiratory motion to be accounted for. It is envisaged that the model will be used to track the tumour, and to monitor organs at risk (OARs), during gated or tracked radiotherapy (RT) treatment of lung cancer. The method is tested on CBCT projections from a simulated phantom in two cases. The simulations are generated from a patient respiratory trace and the associated CBCT scanner geometry. With motion correction, the maximum ℓ2-norm errors were reduced from 24.5 mm to 0.698 mm in case 1 and from 20.0 mm to 0.101 mm in case 2.
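For concreteness, the reported maximum ℓ2-norm error can be computed from an estimated and a ground-truth displacement vector field as in the sketch below; the function name and array shapes are illustrative assumptions, since the paper's evaluation code is not described here.

```python
# Hedged sketch of the error metric: the maximum per-voxel l2-norm
# difference between an estimated and a ground-truth displacement field.
import numpy as np

def max_l2_error(dvf_est, dvf_true):
    """dvf_est, dvf_true: (..., 3) displacement vector fields in mm."""
    per_voxel = np.linalg.norm(dvf_est - dvf_true, axis=-1)  # l2 norm per voxel
    return per_voxel.max()
```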
Object recognition is an essential capability for performing various tasks. Humans naturally use visual and tactile perception, separately or together, to extract an object's class and properties. Typical approaches for robots, however, require complex visual systems or multiple high-density tactile sensors, which can be highly expensive. In addition, they usually require the collection of a large dataset from real objects through direct interaction. In this paper, we propose a kinesthetic-based object recognition method that can be performed with any multi-fingered robotic hand whose kinematics is known. The method does not require tactile sensors and is based on observing grasps of the objects. We utilize a unique, frame-invariant parameterization of grasps to learn instances of object shapes. To train a classifier, training data is generated rapidly and entirely in a computational process, without interaction with real objects. We then propose and compare two iterative algorithms that can integrate any trained classifier. The classifiers and algorithms are independent of any particular robot hand and can therefore be applied to various hands. We show in experiments that, with only a few grasps, the algorithms achieve accurate classification. Furthermore, we show that the object recognition approach scales to objects of various sizes. Similarly, a global classifier is trained to identify general geometries (e.g., an ellipsoid or a box) rather than particular ones, and is demonstrated on a large set of objects. Full-scale experiments and analysis are provided to demonstrate the performance of the method.
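One simple way to integrate a per-grasp classifier over successive grasps is a naive-Bayes-style posterior update that stops once the posterior is sufficiently peaked. The sketch below assumes an sklearn-style `predict_proba` interface mapping a frame-invariant grasp feature vector to class probabilities; it is an illustrative scheme, not necessarily either of the authors' two algorithms.

```python
# Hedged sketch: iterative integration of any trained classifier over
# successive grasps via a naive-Bayes posterior update (an assumption,
# not the paper's exact algorithm). Assumes at least one grasp.
import numpy as np

def classify_by_grasps(classifier, grasp_features, threshold=0.95):
    """grasp_features: iterable of (D,) numpy feature vectors, one per grasp.
    Returns (predicted class index, number of grasps used)."""
    log_posterior = None
    for i, features in enumerate(grasp_features, start=1):
        probs = classifier.predict_proba(features.reshape(1, -1))[0]
        log_p = np.log(np.clip(probs, 1e-12, None))   # avoid log(0)
        log_posterior = log_p if log_posterior is None else log_posterior + log_p
        # Normalise in log space for numerical stability.
        posterior = np.exp(log_posterior - log_posterior.max())
        posterior /= posterior.sum()
        if posterior.max() >= threshold:              # confident: stop grasping
            return int(posterior.argmax()), i
    return int(posterior.argmax()), i
```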